Episodes

  • #8: Bing Chat, AI labs on safety, and pausing Future Matters
    2023/03/21

    Future Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. Future Matters is also available in Spanish.

    00:00 Welcome to Future Matters. 00:44 A message to our readers. 01:09 All things Bing. 05:27 Summaries. 14:20 News. 16:10 Opportunities. 17:19 Audio & video. 18:16 Newsletters. 18:50 Conversation with Tom Davidson. 19:13 The importance of understanding and forecasting AI takeoff dynamics. 21:55 Start and end points of AI takeoff. 24:25 Distinction between capabilities takeoff and impact takeoff. 25:47 The ‘compute-centric framework’ for AI forecasting. 27:12 How the compute-centric assumption could be wrong. 29:26 The main lines of evidence informing estimates of the effective FLOP gap. 34:23 The main drivers of the shortened timelines in this analysis. 36:52 The idea that we'll be "swimming in runtime compute" by the time we’re training human-level AI systems. 37:28 Is the ratio between the compute required for model training vs. model inference relatively stable? 40:37 Improving estimates of AI takeoffs.

    42 min
  • #7: AI timelines, AI skepticism, and lock-in
    2023/02/03

    Future Matters is a newsletter about longtermism and existential risk by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. Future Matters is also available in Spanish.

    00:00 Welcome to Future Matters. 00:57 Davidson — What a compute-centric framework says about AI takeoff speeds. 02:19 Chow, Halperin & Mazlish — AGI and the EMH. 02:58 Hatfield-Dodds — Concrete reasons for hope about AI. 03:37 Karnofsky — Transformative AI issues (not just misalignment). 04:08 Vaintrob — Beware safety-washing. 04:45 Karnofsky — How we could stumble into AI catastrophe. 05:21 Liang & Manheim — Managing the transition to widespread metagenomic monitoring. 05:51 Crawford — Technological stagnation: why I came around. 06:38 Karnofsky — Spreading messages to help with the most important century. 07:16 Wynroe, Atkinson & Sevilla — Literature review of transformative artificial intelligence timelines. 07:50 Yagudin, Mann & Sempere — Update to Samotsvety AGI timelines. 08:15 Dourado — Heretical thoughts on AI. 08:43 Browning & Veit — Longtermism and animals. 09:04 One-line summaries. 10:28 News. 14:13 Conversation with Lukas Finnveden. 14:37 Could you clarify what you mean by AGI and lock-in? 16:36 What are the five claims one could make about the long-run trajectory of intelligent life? 18:26 What are the three claims about lock-in, conditional on the arrival of AGI? 20:21 Could lock-in still happen without whole brain emulation? 21:32 Could you explain why the form of alignment required for lock-in would be easier to solve? 23:12 Could you elaborate on the stability of the postulated long-lasting institutions and on potential threats? 26:02 Do you have any thoughts on the desirability of long-term lock-in? 28:24 What’s the story behind this report?

    Less than 1 min
  • #6: FTX collapse, value lock-in, and counterarguments to AI x-risk
    2022/12/30

    Future Matters is a newsletter about longtermism by Matthew van der Merwe and Pablo Stafforini. Each month we curate and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter. Future Matters is also available in Spanish.

    00:00 Welcome to Future Matters. 01:05 A message to our readers. 01:54 Finnveden, Riedel & Shulman — Artificial general intelligence and lock-in. 02:33 Grace — Counterarguments to the basic AI x-risk case. 03:17 Grace — Let’s think about slowing down AI. 04:18 Piper — Review of What We Owe the Future. 05:04 Clare & Martin — How bad could a war get? 05:26 Rodríguez — What is the likelihood that civilizational collapse would cause technological stagnation? 06:28 Ord — What kind of institution is needed for existential security? 07:00 Ezell — A lunar backup record of humanity. 07:37 Tegmark — Why I think there's a one-in-six chance of an imminent global nuclear war. 08:31 Hobbhahn — The next decades might be wild. 08:54 Karnofsky — Why would AI "aim" to defeat humanity? 09:44 Karnofsky — High-level hopes for AI alignment. 10:27 Karnofsky — AI safety seems hard to measure. 11:10 Karnofsky — Racing through a minefield. 12:07 Barak & Edelman — AI will change the world, but won’t take it over by playing “3-dimensional chess”. 12:53 Our World in Data — New page on artificial intelligence. 14:06 Luu — Futurist prediction methods and accuracy. 14:38 Kenton et al. — Clarifying AI x-risk. 15:39 Wyg — A theologian's response to anthropogenic existential risk. 16:12 Wilkinson — The unexpected value of the future. 16:38 Aaronson — Talk on AI safety. 17:20 Tarsney & Wilkinson — Longtermism in an infinite world. 18:13 One-line summaries. 25:01 News. 28:29 Conversation with Katja Grace. 28:42 Could you walk us through the basic case for existential risk from AI? 29:42 What are the most important weak points in the argument? 30:37 Comparison between misaligned AI and corporations. 32:07 How do you think people in the AI safety community are thinking about this basic case wrong? 33:23 If these arguments were supplemented with clearer claims, does that rescue some of the plausibility? 34:30 Does the disagreement about the basic intuitive case for AI risk undermine the case itself? 35:34 Could you describe how your views on AI risk have changed over time? 36:14 Could you quantify your credence in the probability of existential catastrophe from AI? 36:52 When you reached that number, did it surprise you?

    38 min
  • #5: supervolcanoes, AI takeover, and What We Owe the Future
    2022/09/13

    Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter.

    00:00 Welcome to Future Matters. 01:08 MacAskill — What We Owe the Future. 01:34 Lifland — Samotsvety's AI risk forecasts. 02:11 Halstead — Climate Change and Longtermism. 02:43 Good Judgment — Long-term risks and climate change. 02:54 Thorstad — Existential risk pessimism and the time of perils. 03:32 Hamilton — Space and existential risk. 04:07 Cassidy & Mani — Huge volcanic eruptions. 04:45 Boyd & Wilson — Island refuges for surviving nuclear winter and other abrupt sun-reducing catastrophes. 05:28 Hilton — Preventing an AI-related catastrophe. 06:13 Lewis — Most small probabilities aren't Pascalian. 07:04 Yglesias — What's long-term about "longtermism”? 07:33 Lifland — Prioritizing x-risks may require caring about future people. 08:40 Karnofsky — AI strategy nearcasting. 09:11 Karnofsky — How might we align transformative AI if it's developed very soon? 09:51 Matthews — How effective altruism went from a niche movement to a billion-dollar force. 10:28 News. 14:28 Conversation with Ajeya Cotra. 15:02 What do you mean by human feedback on diverse tasks (HFDT) and what made you focus on it? 18:08 Could you walk us through the three assumptions you make about how this scenario plays out? 20:49 What are the key properties of the model you call Alex? 22:55 What do you mean by “playing the training game”, and why would Alex behave in that way? 24:34 Can you describe how deploying Alex would result in a loss of human control? 29:40 Can you talk about the sorts of specific countermeasures to prevent takeover?

    31 min
  • #4: AI timelines, AGI risk, and existential risk from climate change
    2022/08/07

    Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum and follow on Twitter.

    00:00 Welcome to Future Matters 01:11 Steinhardt — AI forecasting: one year in 01:52 Davidson — Social returns to productivity growth 02:26 Brundage — Why AGI timeline research/discourse might be overrated 03:03 Cotra — Two-year update on my personal AI timelines 03:50 Grace — What do ML researchers think about AI in 2022? 04:43 Leike — On the windfall clause 05:35 Cotra — Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover 06:32 Maas — Introduction to strategic perspectives on long-term AI governance 06:52 Hadshar — How moral progress happens: the decline of footbinding as a case study 07:35 Trötzmüller — Why EAs are skeptical about AI safety 08:08 Schubert — Moral circle expansion isn’t the key value change we need 08:52 Šimčikas — Wild animal welfare in the far future 09:51 Heikkinen — Strong longtermism and the challenge from anti-aggregative moral views 10:28 Rational Animations — Video on Karnofsky's Most important century 11:23 Other research 12:47 News 15:00 Conversation with John Halstead 15:33 What level of emissions should we reasonably expect over the coming decades? 18:11 What do those emissions imply for warming? 20:52 How worried should we be about the risk of climate change from a longtermist perspective? 26:53 What is the probability of an existential catastrophe due to climate change? 27:06 Do you think EAs should fund modelling work of tail risks from climate change? 28:45 What would be the best use of funds?

    31 min
  • #3: digital sentience, AGI ruin, and forecasting track records
    2022/07/04

    Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Substack, read on the EA Forum and follow on Twitter.

    00:00 Welcome to Future Matters 01:11 Long — Lots of links on LaMDA 01:48 Lovely — Do we need a better understanding of 'progress'? 02:11 Base — Things usually end slowly 02:47 Yudkowsky — AGI ruin: a list of lethalities 03:38 Christiano — Where I agree and disagree with Eliezer 04:31 Garfinkel — On deference and Yudkowsky's AI risk estimates 05:13 Karnofsky — The track record of futurists seems … fine 06:08 Aaronson — Joining OpenAI to work on AI safety 06:52 Shiller — The importance of getting digital consciousness right 07:53 Pilz — Germans' opinions on translations of “longtermism” 08:33 Karnofsky — AI could defeat all of us combined 09:36 Beckstead — Future Fund June 2022 update 11:02 News 14:45 Conversation with Robert Long 15:05 What artificial sentience is and why it’s important 16:56 “The Big Question” and the assumptions on which it depends 19:30 How problems arising from AI agency and AI sentience compare in terms of importance, neglectedness, tractability 21:57 AI sentience and the alignment problem 24:01 The Blake Lemoine saga and the quality of the ensuing public discussion 26:29 The risks of AI sentience becoming lumped in with certain other views 27:55 How to deal with objections coming from different frameworks 28:50 The analogy between AI sentience and animal welfare 30:10 The probability of large language models like LaMDA and GPT-3 being sentient 32:41 Are verbal reports strong evidence for sentience?

    Less than 1 min
  • #2: Clueless skepticism, 'longtermist' as an identity, and nanotechnology strategy research
    2022/05/28

    Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Substack, read on the EA Forum and follow on Twitter.

    00:01 Welcome to Future Matters 01:25 Schubert — Against cluelessness 02:23 Carlsmith — Presentation on existential risk from power-seeking AI 03:45 Vaintrob — Against "longtermist" as an identity 04:30 Bostrom & Shulman — Propositions concerning digital minds and society 05:02 MacAskill — EA and the current funding situation 05:51 Beckstead — Some clarifications on the Future Fund's approach to grantmaking 06:46 Caviola, Morrisey & Lewis — Most students who would agree with EA ideas haven't heard of EA yet 07:32 Villalobos & Sevilla — Potatoes: A critical review 08:09 Ritchie — How we fixed the ozone layer 08:57 Snodin — Thoughts on nanotechnology strategy research 09:31 Cotton-Barratt — Against immortality 09:50 Smith & Sandbrink — Biosecurity in the age of open science 10:30 Cotton-Barratt — What do we want the world to look like in 10 years? 10:52 Hilton — Climate change: problem profile 11:30 Ligor & Matthews — Outer space and the veil of ignorance 12:21 News 14:46 Conversation with Ben Snodin

    23 min
  • #1: AI takeoff, longtermism vs. existential risk, and probability discounting
    2022/04/23
    The remedies for all our diseases will be discovered long after we are dead; and the world will be made a fit place to live in. It is to be hoped that those who live in those days will look back with sympathy to their known and unknown benefactors. — John Stuart Mill

    Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent longtermist. You can also subscribe on Substack, read on the EA Forum and follow on Twitter.

    Research

    Scott Alexander's "Long-termism" vs. "existential risk" worries that “longtermism” may be a worse brand (though not necessarily a worse philosophy) than “existential risk”. It seems much easier to make someone concerned about transformative AI by noting that it might kill them and everyone else, than by pointing out its effects on people in the distant future. We think that Alexander raises a valid worry, although we aren’t sure the worry favors the “existential risk” branding over the “longtermism” branding as much as he suggests: existential risks are, after all, defined as risks to humanity's long-term potential. Both of these concepts, in fact, attempt to capture the core idea that what ultimately matters is mostly located in the far future: existential risk uses the language of “potential” and emphasizes threats to it, whereas longtermism instead expresses the idea in terms of value and the duties it creates. Maybe the “existential risk” branding seems to address Alexander’s worry better because it draws attention to the threats to this value, which are disproportionately (but not exclusively) located in the short term, while the “longtermism” branding emphasizes instead the determinants of value, which are in the far future.

    In General vs AI-specific explanations of existential risk neglect, Stefan Schubert asks why we systematically neglect existential risk. The standard story invokes general explanations, such as cognitive biases and coordination problems. But Schubert notes that people seem to have specific biases that cause them to underestimate AI risk, e.g. it sounds outlandish and counterintuitive. If unaligned AI is the greatest source of existential risk in the near term, then these AI-specific biases could explain most of our neglect.

    Max Roser’s The future is vast is a powerful new introduction to longtermism. His graphical representations do well to convey the scale of humanity’s potential, and have made it onto the Wikipedia entry for longtermism.

    Thomas Kwa’s Effectiveness is a conjunction of multipliers makes the important observation that (1) a person’s impact can be decomposed into a series of impact “multipliers” and that (2) these terms interact multiplicatively, rather than additively, with each other. For example, donating 80% instead of 10% multiplies impact by a factor of 8, and earning $1m/year instead of $250k/year multiplies impact by a factor of 4; but doing both of these things multiplies impact by a factor of 32. Kwa shows that many other common EA choices are best seen as multipliers of impact, and notes that multipliers related to judgment and ambition are especially important for longtermists.

    The first installment in a series on “learning from crisis”, Jan Kulveit's Experimental longtermism: theory needs data (co-written with Gavin Leech) recounts the author's motivation to launch Epidemic Forecasting, a modelling and forecasting platform that sought to present probabilistic data to decisionmakers and the general public. Kulveit realized that his "longtermist" models had relatively straightforward implications for the COVID pandemic, such that trying to apply them to this case (1) had the potential to make a direct, positive difference to the crisis and (2) afforded an opportunity to experimentally test those models. While the first of these effects had obvious appeal, Kulveit considers the second especially important from a longtermist perspective: attempts to think about the long-term future lack rapid feedback loops, and disciplines that aren't tightly anchored to empirical reality are much more likely to go astray. He concludes that longtermists should engage more often in this type of experimentation, and generally pay more attention to the longtermist value of information that "near-termist" projects can sometimes provide.

    Rhys Lindmark’s FTX Future Fund and Longtermism considers the significance of the Future Fund within the longtermist ecosystem by examining trends in EA funding over time. Interested readers should look at the charts in the original post for more details, but roughly it looks like Open Philanthropy has allocated about 20% of its budget to longtermist causes in recent years, accounting for about 80% of all longtermist grantmaking. On the assumption that Open Phil ...
    30 min