Summary
Chris Canal, co-founder of EquiStamp, joins muckrAIkers as our first ever podcast guest! In this ~3.5 hour interview, we discuss intelligence vs. competencies, the importance of test-time compute, moving goalposts, the orthogonality thesis, and much more.

A seasoned software developer, Chris started EquiStamp in late 2023 as a way to improve our current understanding of model failure modes and capabilities. Now a key contractor for METR, EquiStamp evaluates the next generation of LLMs from frontier model developers like OpenAI and Anthropic.

EquiStamp is hiring, so if you're a software developer interested in a fully remote opportunity with flexible working hours, join the EquiStamp Discord server and message Chris directly; oh, and let him know muckrAIkers sent you!

(00:00) - Recording date
(00:05) - Intro
(00:29) - Hot off the press
(02:17) - Introducing Chris Canal
(19:12) - World/risk models
(35:21) - Competencies + decision making power
(42:09) - Breaking models down
(01:05:06) - Timelines, test time compute
(01:19:17) - Moving goalposts
(01:26:34) - Risk management pre-AGI
(01:46:32) - Happy endings
(01:55:50) - Causal chains
(02:04:49) - Appetite for democracy
(02:20:06) - Tech-frame based fallacies
(02:39:56) - Bringing back real capitalism
(02:45:23) - Orthogonality Thesis
(03:04:31) - Why we do this
(03:15:36) - EquiStamp!

Links
EquiStamp
Chris's Twitter
METR Paper - RE-Bench: Evaluating frontier AI R&D capabilities of language model agents against human experts
All Trades article - Learning from History: Preventing AGI Existential Risks through Policy by Chris Canal
Better Systems article - The Omega Protocol: Another Manhattan Project

Superintelligence & Commentary
Wikipedia article - Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
Reflective Altruism article - Against the singularity hypothesis (Part 5: Bostrom on the singularity)
Into AI Safety Interview - Scaling Democracy w/ Dr. Igor Krawczuk

Referenced Sources
Book - Man-made Catastrophes and Risk Information Concealment: Case Studies of Major Disasters and Human Fallibility
Artificial Intelligence Paper - Reward is Enough
Wikipedia article - Capital and Ideology by Thomas Piketty
Wikipedia article - Pantheon

LeCun on AGI
"Won't Happen" - Time article - Meta’s AI Chief Yann LeCun on AGI, Open-Source, and AI Risk
"But if it does, it'll be my research agenda: latent state models, which I happen to research" - Meta Platforms Blogpost - I-JEPA: The first AI model based on Yann LeCun’s vision for more human-like AI

Other Sources
Stanford CS Senior Project - Timing Attacks on Prompt Caching in Language Model APIs
TechCrunch article - AI researcher François Chollet founds a new AI lab focused on AGI
White House Fact Sheet - Ensuring U.S. Security and Economic Strength in the Age of Artificial Intelligence
New York Post article - Bay Area lawyer drops Meta as client over CEO Mark Zuckerberg’s ‘toxic masculinity and Neo-Nazi madness’
OpenEdition Academic Review of Thomas Piketty
Neural Processing Letters Paper - A Survey of Encoding Techniques for Signal Processing in Spiking Neural Networks
BFI Working Paper - Do Financial Concerns Make Workers Less Productive?
No Mercy/No Malice article - How to Survive the Next Four Years by Scott Galloway