Episodes

  • AI robot surgeon that corrects its own mistakes
    2025/08/20

    Link to the preprint discussed: https://arxiv.org/pdf/2505.10251

    Link to the project with explanations: https://h-surgical-robot-transformer.github.io/


    A surgical robot that corrects its own mistakes sounds like science fiction.

New research from Johns Hopkins and Stanford makes it a reality. But is it ready for the operating room?

    The new SRT-H system allows a da Vinci robot to autonomously perform key steps of a gallbladder removal, achieving a 100% success rate in a lab setting. It can even identify and correct its own errors in real-time—a huge leap for surgical AI.

    But the biggest challenge isn't executing a perfect plan; it's managing the messy, unpredictable reality of a live patient.

    In the latest episode of The Health AI Brief podcast, we break down:

    - The gap between lab performance and clinical reality.

    - The crucial shift from chasing full autonomy to proving ultra-reliable, supervised autonomy.

    It's a really interesting and impressive application of AI. This isn't just about technology. It's about building trust, managing risk, and creating AI that surgeons can actually rely on.

Authors of the work: Ji Woong (Brian) Kim (1,2), Juo-Tung Chen (1), Pascal Hansen (1), Lucy X. Shi (2), Antony Goldenberg (1), Samuel Schmidgall (1), Paul Maria Scheikl (1), Anton Deguet (1), Brandon M. White (1), De Ru Tsai (3), Richard Cha (3), Jeffrey Jopling (1), Chelsea Finn (2), Axel Krieger (1)

(1) Johns Hopkins University, (2) Stanford University, (3) Optosurgical

    #AIinHealthcare #SurgicalRobotics #AutonomousSurgery #HealthTech #DigitalHealth #MedTech #AIinSurgery #MachineLearning #daVinciSurgery #PatientSafety #FutureofMedicine #ClinicalInnovation #JohnsHopkins #Stanford

    Music generated by Mubert https://mubert.com/render


    healthaibrief@outlook.com

5 min
  • 010 The AI Tipping Point in Medicine - Why Now?
    2025/08/16

    AI in medicine has reached a clear tipping point. But what are the specific factors driving this rapid progress? This episode breaks down the three essential pillars: the explosion in clinical data, massive leaps in computation, and recent, powerful breakthroughs in algorithms.

    We explore how mature algorithms from outside of medicine, particularly in image and natural language processing, are now being repurposed for clinical use. You'll also learn why the biggest hurdles for AI in healthcare are no longer necessarily the algorithms themselves, but the practical challenges of accessing high-quality clinical data, system integration, and the costs of computation.

    This is your essential primer on the core components of modern clinical AI, providing the foundation needed to evaluate new health tech tools.

    Keywords: AI in Healthcare, Machine Learning, Digital Health, Clinical Data, Algorithms, Computation, Medical Imaging, AI for Doctors, AI in medicine

    Music generated by Mubert https://mubert.com/render


    healthaibrief@outlook.com

4 min
  • Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy - Does 'helpful' tech actually make us worse?
    2025/08/13

A new study in The Lancet has sent a ripple of anxiety through the clinical AI community. The paper suggests that AI tools designed to help doctors may actually cause their skills to decline over time.

    But is the evidence as solid as the headlines suggest? Is AI dependency a real threat to patient safety?

    #HealthAI #ArtificialIntelligence #ClinicalAI #PatientSafety #Deskilling #DigitalHealth #MedTech #Colonoscopy #Gastroenterology #TheLancet #NHS #DeepMind #MedicalPodcast #HealthcareInnovation

    Music generated by Mubert https://mubert.com/render


    healthaibrief@outlook.com

5 min
  • 009 AI Agents: Automating Healthcare or a New Clinical Safety Risk?
    2025/08/12

    AI that can do, not just tell. We explore AI Agents: systems that go beyond diagnosis to take action. This leap forward promises to tackle our admin overload but brings a new level of clinical risk. Are we ready?


    Music generated by Mubert https://mubert.com/render

    AI Agents, Healthcare AI, Clinical Safety, Physician Burnout, Automation, Patient Safety, Autonomous AI, Human-in-the-Loop, AI Risk, Workflow Automation, Future of Medicine, AI Hallucination, Administrative Tasks

    healthaibrief@outlook.com

3 min
  • 008 Narrow vs General AI, AGI and ASI
    2025/08/09

    We see AI that can read an ECG, but we also hear headlines about a future superintelligence. How do these two realities connect?


    In this episode, we provide an essential reality check. We break down the crucial difference between the AI we have in our clinics today (Narrow AI) and the AI of science fiction (AGI & ASI).


    Understanding this spectrum is key. It helps you ground your expectations when a new tool is presented for your hospital and separates the practical task of clinical validation from the long-term ethical debates.


    🎧 Listen now to cut through the hype and understand the real limits of the AI you'll encounter in your practice.


    Music generated by Mubert https://mubert.com/render

    #AIinHealthcare #DigitalHealth #NarrowAI #AGI #ArtificialIntelligence #FutureofMedicine #MedEd #HealthTech


    healthaibrief@outlook.com

4 min
  • OpenAI's New Release: Why 'Open-Weights' Isn't 'Open-Source' - And Why They're Both Relevant For Clinicians
    2025/08/06

You’ve seen the headlines about OpenAI’s new model, but much of the coverage confuses 'open-weights' with 'open-source'. They are not the same, and the distinction matters for patient data security and clinical trust.


    In this episode of The Health AI Brief, we decode some of the jargon. Learn:

    - The fundamental difference between open-weights and true open-source AI.

    - The implications for patient privacy and data security when running models locally.

    - The key question you must ask any vendor about their "open" AI model.


    Tune in to understand the risks and benefits before these tools arrive in your hospital.


    AI in Healthcare, Open-Source AI, Open-Weights AI, OpenAI, LLM, Healthtech, Digital Health, Clinical AI, Patient Data, Data Privacy, Data Security, GDPR, HIPAA, Medical AI, Artificial Intelligence, Machine Learning.

    Music generated by Mubert https://mubert.com/render


    healthaibrief@outlook.com

4 min
  • 007 Parameters
    2025/08/06

    You hear about new AI models having "billions of parameters." It sounds impossibly complex, but the core idea is surprisingly simple and it's the single most important concept for understanding how an AI actually works. These parameters determine an AI's capabilities, its limitations, and its potential for bias.

    In this episode of The Health AI Brief we'll cover:

    - A simple, intuitive analogy for what a 'parameter' actually is.

    - How the process of 'training' an AI is really just about adjusting these billions of settings.

    - Why understanding this concept helps you cut through the hype and critically appraise the AI tools being marketed to clinicians.

    This is a foundational concept. Grasp this, and you'll have a much clearer view of the technology poised to change our practice.

    AI in Healthcare, AI Parameters, Machine Learning, Neural Networks, Large Language Models (LLM), Healthtech, Digital Health, Clinical AI, AI Training, Model Architecture, Artificial Intelligence, Medical AI Explained.

    Music generated by Mubert https://mubert.com/render

    healthaibrief@outlook.com

3 min
  • 006 Deep learning
    2025/08/03

    How can an AI analyse a CT scan or pathology slide with expert-level accuracy? The answer is Deep Learning—the engine behind the revolution in medical imaging.


    In this episode, we explain how these 'deep' neural networks teach themselves to see complex patterns, much like our own visual cortex processes information from simple edges to complex objects.


    Knowing this matters. It helps you understand why these models are often "black boxes," why appraising their training data is so critical, and why they are hyper-specialised for a single task.


    🎧 Listen now to understand the technology powering the next generation of medical imaging tools.


    Music generated by Mubert https://mubert.com/render

    #DeepLearning #AIinMedicine #MedicalImaging #Radiology #Pathology #HealthTech #NeuralNetworks #ClinicalAI #MedEd


    healthaibrief@outlook.com

4 min