Vanishing Gradients

By Hugo Bowne-Anderson

About this content

A podcast about all things data, brought to you by data scientist Hugo Bowne-Anderson. It's time for more critical conversations about the challenges in our industry in order to build better compasses for the solution space! To this end, this podcast will consist of long-format conversations between Hugo and other people who work broadly in the data science, machine learning, and AI spaces. We'll dive deep into all the moving parts of the data world, so if you're new to the space, you'll have an opportunity to learn from the experts. And if you've been around for a while, you'll find out what's happening in many other parts of the data world.

© 2025 Hugo Bowne-Anderson
Episodes
  • Episode 49: Why Data and AI Still Break at Scale (and What to Do About It)
    2025/06/05
    If we want AI systems that actually work in production, we need better infrastructure—not just better models. In this episode, Hugo talks with Akshay Agrawal (Marimo, ex-Google Brain, Netflix, Stanford) about why data and AI pipelines still break down at scale, and how we can fix the fundamentals: reproducibility, composability, and reliable execution. They discuss:
    🔁 Why reactive execution matters—and how current tools fall short (a minimal sketch of the reactive cell model follows this entry)
    🛠️ The design goals behind Marimo, a new kind of Python notebook
    ⚙️ The hidden costs of traditional workflows (and what breaks at scale)
    📦 What it takes to build modular, maintainable AI apps
    🧪 Why debugging LLM systems is so hard—and what better tooling looks like
    🌍 What we can learn from decades of tools built for and by data practitioners
    Toward the end of the episode, Hugo and Akshay walk through two live demos: Hugo shares how he's been using Marimo to prototype an app that extracts structured data from world leader bios, and Akshay shows how Marimo handles agentic workflows with memory and tool use—built entirely in a notebook.
    This episode is about tools, but it's also about culture. If you've ever hit a wall with your current stack—or felt like your tools were working against you—this one's for you.
    LINKS
    * marimo | a next-generation Python notebook (https://marimo.io/)
    * SciPy conference, 2025 (https://www.scipy2025.scipy.org/)
    * Hugo's Marimo World Leader Face Embedding demo (https://www.youtube.com/watch?v=DO21QEcLOxM)
    * Vanishing Gradients YouTube Channel (https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA)
    * Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
    * Hugo's recent newsletter about upcoming events and more! (https://hugobowne.substack.com/p/ai-as-a-civilizational-technology)
    * Watch the podcast here on YouTube! (https://youtu.be/wU82fz4iRfo)
    🎓 Want to go deeper? Check out Hugo's course: Building LLM Applications for Data Scientists and Software Engineers. Learn how to design, test, and deploy production-grade LLM systems—with observability, feedback loops, and structure built in. This isn't about vibes or fragile agents. It's about making LLMs reliable, testable, and actually useful. Includes over $800 in compute credits and guest lectures from experts at DeepMind, Moderna, and more. Cohort starts July 8—use this link for a 10% discount (https://maven.com/hugo-stefan/building-llm-apps-ds-and-swe-from-first-principles?promoCode=LLM10)
    1 hr 22 min
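    For readers unfamiliar with the reactive execution idea discussed above, here is a minimal sketch based on marimo's public file format (marimo.io): a notebook is a plain Python file in which each cell is a function whose parameters declare the names it reads and whose return values declare the names it defines, so the runtime can build a dependency graph and re-run only downstream cells. The variable names are illustrative, not from the episode.

    # A marimo notebook is a plain Python file. Each cell is a function whose
    # parameters are the names it reads and whose returns are the names it
    # defines; marimo uses these signatures to build a dependency graph, so
    # editing one cell re-runs only the cells downstream of it.
    import marimo

    app = marimo.App()

    @app.cell
    def _():
        raw = [1, 2, 3, 4]  # hypothetical upstream data
        return (raw,)

    @app.cell
    def _(raw):
        total = sum(raw)  # depends on `raw`; re-runs whenever `raw` changes
        return (total,)

    @app.cell
    def _(total):
        print(f"total = {total}")  # downstream of `total`
        return

    if __name__ == "__main__":
        app.run()

    Contrast this with a traditional notebook, where out-of-order cell execution can leave hidden state that makes results unreproducible; making the dependency graph explicit is what enables the reproducibility and composability discussed in the episode.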
  • Episode 1: Introducing Vanishing Gradients
    2022/02/16
    In this brief introduction, Hugo introduces the rationale behind launching a new data science podcast and gets excited about his upcoming guests: Jeremy Howard, Rachael Tatman, and Heather Nolis! Original music, bleeps, and blops by local Sydney legend PlaneFace (https://planeface.bandcamp.com/album/fishing-from-an-asteroid)!
    5 min
  • Episode 48: How to Benchmark AGI with Greg Kamradt
    2025/05/23
    If we want to make progress toward AGI, we need a clear definition of intelligence—and a way to measure it. In this episode, Hugo talks with Greg Kamradt, President of the ARC Prize Foundation, about ARC-AGI: a benchmark built on François Chollet's definition of intelligence as "the efficiency at which you learn new things." Unlike most evals that focus on memorization or task completion, ARC is designed to measure generalization—and expose where today's top models fall short. They discuss:
    🧠 Why we still lack a shared definition of intelligence
    🧪 How ARC tasks force models to learn novel skills at test time (see the task-format sketch after this entry)
    📉 Why GPT-4-class models still underperform on ARC
    🔎 The limits of traditional benchmarks like MMLU and BIG-bench
    ⚙️ What the OpenAI o3 results reveal—and what they don't
    💡 Why generalization and efficiency, not raw capability, are key to AGI
    Greg also shares what he's seeing in the wild: how startups and independent researchers are using ARC as a North Star, how benchmarks shape the frontier, and why the ARC team believes we'll know we've reached AGI when humans can no longer write tasks that models can't solve.
    This conversation is about evaluation—not hype. If you care about where AI is really headed, this one's worth your time.
    LINKS
    * ARC Prize -- What is ARC-AGI? (https://arcprize.org/arc-agi)
    * On the Measure of Intelligence by François Chollet (https://arxiv.org/abs/1911.01547)
    * Greg Kamradt on Twitter (https://x.com/GregKamradt)
    * Hugo's High Signal Podcast with Fei-Fei Li (https://high-signal.delphina.ai/episode/fei-fei-on-how-human-centered-ai-actually-gets-built)
    * Vanishing Gradients YouTube Channel (https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA)
    * Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
    * Hugo's recent newsletter about upcoming events and more! (https://hugobowne.substack.com/p/ai-as-a-civilizational-technology)
    * Watch the podcast here on YouTube! (https://youtu.be/wU82fz4iRfo)
    🎓 Want to go deeper? Check out Hugo's course: Building LLM Applications for Data Scientists and Software Engineers. Learn how to design, test, and deploy production-grade LLM systems—with observability, feedback loops, and structure built in. This isn't about vibes or fragile agents. It's about making LLMs reliable, testable, and actually useful. Includes over $800 in compute credits and guest lectures from experts at DeepMind, Moderna, and more. Cohort starts July 8—use this link for a 10% discount (https://maven.com/hugo-stefan/building-llm-apps-ds-and-swe-from-first-principles?promoCode=LLM10)
    1 hr 4 min
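    To make the "learn novel skills at test time" point above concrete, here is a hedged sketch of what an ARC task looks like on disk, following the JSON layout of the public ARC repository (github.com/fchollet/ARC). The tiny task below is invented for illustration; real ARC grids are larger and the transformations far less obvious.

    # Each ARC task is a JSON object with "train" and "test" lists of
    # {"input": grid, "output": grid} pairs, where a grid is a list of rows
    # of integers 0-9 (colors). A solver sees the train pairs and must
    # produce the test output: generalization, not recall.
    import json

    # Invented toy task: the hidden rule is "swap colors 1 and 2".
    task = {
        "train": [
            {"input": [[1, 2], [2, 1]], "output": [[2, 1], [1, 2]]},
            {"input": [[1, 1], [2, 2]], "output": [[2, 2], [1, 1]]},
        ],
        "test": [{"input": [[2, 1], [1, 1]], "output": [[1, 2], [2, 2]]}],
    }

    def solve(grid):
        # A hand-written solver for this one toy rule; ARC's point is that
        # the rule differs per task and must be inferred from the train pairs.
        swap = {1: 2, 2: 1}
        return [[swap.get(c, c) for c in row] for row in grid]

    # Scoring is exact match on the full output grid.
    for pair in task["train"] + task["test"]:
        assert solve(pair["input"]) == pair["output"]
    print(json.dumps(task["test"][0]["input"]))

    Because every task encodes a different rule, memorizing past tasks does not help; this is why the episode frames ARC as measuring the efficiency of learning new things rather than stored capability.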
