• Using Role-Playing Scenarios to Identify Bias in LLMs

  • 2024/09/16
  • Duration: 45 min
  • Podcast

Using Role-Playing Scenarios to Identify Bias in LLMs

  • Summary

  • Harmful biases in large language models (LLMs) make AI less trustworthy and secure. Auditing for biases can help identify potential solutions and develop better guardrails to make AI safer. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Katie Robinson and Violet Turri, researchers in the SEI’s AI Division, discuss their recent work using role-playing game scenarios to identify biases in LLMs.
