Future Matters

Authors: Matthew van der Merwe & Pablo Stafforini

Synopsis

Future Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe & Pablo Stafforini.
Copyright 2024
Episodes
  • #8: Bing Chat, AI labs on safety, and pausing Future Matters
    2023/03/21

    Future Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. Future Matters is also available in Spanish.

    00:00 Welcome to Future Matters.
    00:44 A message to our readers.
    01:09 All things Bing.
    05:27 Summaries.
    14:20 News.
    16:10 Opportunities.
    17:19 Audio & video.
    18:16 Newsletters.
    18:50 Conversation with Tom Davidson.
    19:13 The importance of understanding and forecasting AI takeoff dynamics.
    21:55 Start and end points of AI takeoff.
    24:25 Distinction between capabilities takeoff and impact takeoff.
    25:47 The ‘compute-centric framework’ for AI forecasting.
    27:12 How the compute-centric assumption could be wrong.
    29:26 The main lines of evidence informing estimates of the effective FLOP gap.
    34:23 The main drivers of the shortened timelines in this analysis.
    36:52 The idea that we'll be "swimming in runtime compute" by the time we’re training human-level AI systems.
    37:28 Is the ratio between the compute required for model training vs. model inference relatively stable?
    40:37 Improving estimates of AI takeoffs.

    42 min
  • #7: AI timelines, AI skepticism, and lock-in
    2023/02/03

    Future Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. Future Matters is also available in Spanish.

    00:00 Welcome to Future Matters.
    00:57 Davidson — What a compute-centric framework says about AI takeoff speeds.
    02:19 Chow, Halperin & Mazlish — AGI and the EMH.
    02:58 Hatfield-Dodds — Concrete reasons for hope about AI.
    03:37 Karnofsky — Transformative AI issues (not just misalignment).
    04:08 Vaintrob — Beware safety-washing.
    04:45 Karnofsky — How we could stumble into AI catastrophe.
    05:21 Liang & Manheim — Managing the transition to widespread metagenomic monitoring.
    05:51 Crawford — Technological stagnation: why I came around.
    06:38 Karnofsky — Spreading messages to help with the most important century.
    07:16 Wynroe, Atkinson & Sevilla — Literature review of transformative artificial intelligence timelines.
    07:50 Yagudin, Mann & Sempere — Update to Samotsvety AGI timelines.
    08:15 Dourado — Heretical thoughts on AI.
    08:43 Browning & Veit — Longtermism and animals.
    09:04 One-line summaries.
    10:28 News.
    14:13 Conversation with Lukas Finnveden.
    14:37 Could you clarify what you mean by AGI and lock-in?
    16:36 What are the five claims one could make about the long-run trajectory of intelligent life?
    18:26 What are the three claims about lock-in, conditional on the arrival of AGI?
    20:21 Could lock-in still happen without whole brain emulation?
    21:32 Could you explain why the form of alignment required for lock-in would be easier to solve?
    23:12 Could you elaborate on the stability of the postulated long-lasting institutions and on potential threats?
    26:02 Do you have any thoughts on the desirability of long-term lock-in?
    28:24 What’s the story behind this report?

    Less than 1 min
  • #6: FTX collapse, value lock-in, and counterarguments to AI x-risk
    2022/12/30

    Future Matters is a newsletter about longtermism by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. Future Matters is also available in Spanish.

    00:00 Welcome to Future Matters.
    01:05 A message to our readers.
    01:54 Finnveden, Riedel & Shulman — Artificial general intelligence and lock-in.
    02:33 Grace — Counterarguments to the basic AI x-risk case.
    03:17 Grace — Let’s think about slowing down AI.
    04:18 Piper — Review of What We Owe the Future.
    05:04 Clare & Martin — How bad could a war get?
    05:26 Rodríguez — What is the likelihood that civilizational collapse would cause technological stagnation?
    06:28 Ord — What kind of institution is needed for existential security?
    07:00 Ezell — A lunar backup record of humanity.
    07:37 Tegmark — Why I think there's a one-in-six chance of an imminent global nuclear war.
    08:31 Hobbhahn — The next decades might be wild.
    08:54 Karnofsky — Why would AI "aim" to defeat humanity?
    09:44 Karnofsky — High-level hopes for AI alignment.
    10:27 Karnofsky — AI safety seems hard to measure.
    11:10 Karnofsky — Racing through a minefield.
    12:07 Barak & Edelman — AI will change the world, but won’t take it over by playing “3-dimensional chess”.
    12:53 Our World in Data — New page on artificial intelligence.
    14:06 Luu — Futurist prediction methods and accuracy.
    14:38 Kenton et al. — Clarifying AI x-risk.
    15:39 Wyg — A theologian's response to anthropogenic existential risk.
    16:12 Wilkinson — The unexpected value of the future.
    16:38 Aaronson — Talk on AI safety.
    17:20 Tarsney & Wilkinson — Longtermism in an infinite world.
    18:13 One-line summaries.
    25:01 News.
    28:29 Conversation with Katja Grace.
    28:42 Could you walk us through the basic case for existential risk from AI?
    29:42 What are the most important weak points in the argument?
    30:37 Comparison between misaligned AI and corporations.
    32:07 How do you think people in the AI safety community are thinking about this basic case wrong?
    33:23 If these arguments were supplemented with clearer claims, does that rescue some of the plausibility?
    34:30 Does the disagreement about the basic intuitive case for AI risk undermine the case itself?
    35:34 Could you describe how your views on AI risk have changed over time?
    36:14 Could you quantify your credence in the probability of existential catastrophe from AI?
    36:52 When you reached that number, did it surprise you?

    38 min
