Explainability of AI

About this content

What does it really mean for AI to be explainable? Can we trust AI systems to tell us why they do what they do, and should the average person even care? In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by regular guests Jeffery Recker and Bryan Ilg to unpack the messy world of AI explainability and why it matters more than you might think.

From recommender systems to large language models, we explore:

🔍 The difference between explainability and interpretability
- Why even humans struggle to explain their decisions
- What should be considered a "good enough" explanation
- The importance of stakeholder context in defining "useful" explanations
- Why AI literacy and trust go hand in hand
- How concepts from cybersecurity, like zero trust, could inform responsible AI oversight

Plus, hear about the latest report from the Center for Security and Emerging Technology calling for stronger explainability standards, and what it means for AI developers, regulators, and everyday users.

Mentioned in this episode:

🔗 Link to BABL AI's article: https://babl.ai/report-finds-gaps-in-ai-explainability-testing-calls-for-stronger-evaluation-standards/
🔗 Link to "Putting Explainable AI to the Test" paper: https://cset.georgetown.edu/publication/putting-explainable-ai-to-the-test-a-critical-look-at-ai-evaluation-approaches/?utm_source=ai-week-in-review.beehiiv.com&utm_medium=referral&utm_campaign=ai-week-in-review-3-8-25
🔗 Link to BABL AI's "The Algorithm Audit" paper: https://babl.ai/algorithm-auditing-framework/

Check out the babl.ai website for more on AI Governance and Responsible AI!
