• AI Safety and Alignment with Amal Iyer

  • 2024/03/07
  • Duration: 57 min
  • Podcast


  • Summary

  • In this episode, we’re joined by Amal Iyer, Sr. Staff AI Scientist at Fiddler AI.

    Large-scale AI models trained on internet-scale datasets have ushered in a new era of technological capabilities, some of which now match or even exceed human ability. However, this progress underscores the importance of aligning AI with human values to ensure its safe and beneficial integration into society. In this talk, we provide an overview of the alignment problem and highlight promising areas of research spanning scalable oversight, robustness, and interpretability.


