• #3: digital sentience, AGI ruin, and forecasting track records

  • 2022/07/04
  • Listening time: under 1 minute
  • Podcast

#3: digital sentience, AGI ruin, and forecasting track records

  • Summary

  • Episode Notes

    Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Substack, read on the EA Forum, and follow us on Twitter.

    00:00 Welcome to Future Matters
    01:11 Long — Lots of links on LaMDA
    01:48 Lovely — Do we need a better understanding of 'progress'?
    02:11 Base — Things usually end slowly
    02:47 Yudkowsky — AGI ruin: a list of lethalities
    03:38 Christiano — Where I agree and disagree with Eliezer
    04:31 Garfinkel — On deference and Yudkowsky's AI risk estimates
    05:13 Karnofsky — The track record of futurists seems … fine
    06:08 Aaronson — Joining OpenAI to work on AI safety
    06:52 Shiller — The importance of getting digital consciousness right
    07:53 Pilz — German opinions on translations of "longtermism"
    08:33 Karnofsky — AI could defeat all of us combined
    09:36 Beckstead — Future Fund June 2022 update
    11:02 News
    14:45 Conversation with Robert Long
    15:05 What artificial sentience is and why it's important
    16:56 "The Big Question" and the assumptions on which it depends
    19:30 How problems arising from AI agency and AI sentience compare in terms of importance, neglectedness, tractability
    21:57 AI sentience and the alignment problem
    24:01 The Blake Lemoine saga and the quality of the ensuing public discussion
    26:29 The risks of AI sentience becoming lumped in with certain other views
    27:55 How to deal with objections coming from different frameworks
    28:50 The analogy between AI sentience and animal welfare
    30:10 The probability of large language models like LaMDA and GPT-3 being sentient
    32:41 Are verbal reports strong evidence for sentience?


