Episodes

  • Preventing AI Hallucinations in Healthcare: How Specialized LLMs Can Transform Drug Safety
    Dec 25 2025

    AI in healthcare can save lives or put them at risk. This episode explores guardrails, safety LLMs, regulation, and why generic AI controls fail in clinical settings.


    Timestamps:

    00:00 Introduction

    01:26 What are LLM guardrails and why do they matter in healthcare

    02:36 Why AI hallucinations are dangerous in medical settings

    03:47 Why people still use chatbots for medical advice

    05:13 Why generic AI safety tools fail in healthcare

    06:16 Regulation pressure: US vs Europe

    09:03 Guardrail frameworks: Guardrails AI, NeMo, Llama Guard

    15:08 Safety LLMs and red teaming medical AI

    22:17 Why healthcare AI needs application-specific testing

    27:49 Shift-left AI safety and responsible design

    32:44 The ELIZA effect

    37:27 Practical advice for teams building healthcare AI


    Resources for Listeners ►

    Papers:

    - Hakim et al. (2025) "The need for guardrails with large language models in pharmacovigilance."

    - Meta's Llama Guard paper: "Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations" (arXiv:2312.06674)

    - Ayala-Lauroba et al. (2024) "Enhancing Guardrails for Safe and Secure Healthcare AI" (arXiv:2409.17190)


    Code and Models:

    - Hakim et al. analysis code: https://github.com/jlpainter/llm-guardrails/

    - Llama Guard: Available on Hugging Face (requires approval)

    - gpt-oss-safeguard: https://huggingface.co/openai/gpt-oss-safeguard-20b (Apache 2.0)


    Medical Ontologies:

    - MedDRA (Medical Dictionary for Regulatory Activities): https://www.meddra.org/

    - WHO Drug Dictionary: Via Uppsala Monitoring Centre


    Regulatory Guidance:

    - EMA AI Reflection Paper: https://www.ema.europa.eu/en/about-us/how-we-work/data-regulation-big-data-other-sources/artificial-intelligence

    - FDA AI Guidance: Available on FDA.gov



    LISTEN ON ►

    YouTube: https://youtu.be/IWoARQ0G7sg

    Apple Podcasts: https://podcasts.apple.com/us/podcast/the-healthcare-ai-podcast/id1827098175

    Spotify: https://open.spotify.com/show/2XNrQBeCY7OGql2jVhcP7t

    Amazon Music: https://music.amazon.com/podcasts/5b1f49a6-dba8-479e-acdf-9deac2f8f60e/the-healthcare-ai-podcast



    FOLLOW ►

    Website: https://www.johnsnowlabs.com/

    LinkedIn: https://www.linkedin.com/company/johnsnowlabs/

    Facebook: https://www.facebook.com/JohnSnowLabsInc/

    X (Twitter): https://x.com/JohnSnowLabs


    #HealthcareAI #AIGuardrails #MedicalAI #AISafety #AIEthics #HealthTech #AIRegulation #DigitalHealth #AIinMedicine #MLOps #AICompliance #AIHallucinations

    41 min.
  • Mission-Critical Healthcare Systems with Eugene Sayan @Softheon | The Healthcare AI Podcast (Ep. 6)
    Nov 18 2025

    Eugene Ugur Sayan is the Founder, CEO and President of Softheon, a technology company that powers healthcare.gov and state ACA exchanges serving over 10 million Americans daily.

    Eugene filed a patent on intelligent software agents in 1998, long before anyone was discussing "agentic" workflows. Over the years, his team built the Massachusetts Connector (America's first state health exchange) and now powers the infrastructure behind healthcare.gov, running 1,300+ AI agents 24/7/365.


    This isn't theoretical AI. These are production systems where even 99% accuracy means people lose health coverage. Eugene explains why healthcare demands "airline industry standards" (99.999% uptime), the PPT Framework (People, Process, Technology), how his team orchestrates agents across federal and 50-state AI regulations, and why Softheon owns its entire stack, from data centers to the application layer.


    Timestamps:

    00:00 Opening & Introduction

    02:22 The 1998 Patent: Building agentic workflows before ChatGPT

    06:40 Why Healthcare AI Requires 99.999% Accuracy

    10:00 Autonomy, Alignment & Accountability Framework

    12:26 1,300 Semi-Autonomous Agents & Human-in-the-Loop (HITL)

    18:49 Three Layers of AI: Hardware, Platform, Applications

    23:09 Biggest Challenges: People, Process & Technology

    31:04 Breaking Innovation Barriers in Conservative Healthcare

    35:44 Transparency Rules & Value-Based Care

    39:08 The ICHRA Revolution: Healthcare's 401(k) Moment

    44:00 Consumer Engagement: Three Pillars Strategy

    50:00 Entrepreneurship Philosophy & Daily Practice

    53:00 Final Advice


    Listen On:

    YouTube: https://youtu.be/IWoARQ0G7sg

    Apple Podcasts: https://podcasts.apple.com/us/podcast/the-healthcare-ai-podcast/id1827098175

    Spotify: https://open.spotify.com/show/2XNrQBeCY7OGql2jVhcP7t

    Amazon Music: https://music.amazon.com/podcasts/5b1f49a6-dba8-479e-acdf-9deac2f8f60e/the-healthcare-ai-podcast


    You can learn more about the All-in-One Solution for Health Plans at Softheon.com


    Follow Eugene Sayan:

    LinkedIn: https://www.linkedin.com/in/eugenesayan/

    Twitter: https://x.com/SayanEugene


    Connect With Us:

    Website: https://www.johnsnowlabs.com/

    LinkedIn: https://www.linkedin.com/company/johnsnowlabs/

    Facebook: https://www.facebook.com/JohnSnowLabsInc/

    X (Twitter): https://x.com/JohnSnowLabs


    #AI #HealthcareAI #AgenticAI #HealthTech #HealthcareInnovation

    59 min.
  • A Survey of LLM-based Agents in Medicine: How far are we from Baymax?
    Sep 24 2025

    In this episode of The Healthcare AI Podcast, we unpack the impact of LLM-based medical agents on modern medicine – from architecture and multi-agent design to regulation and real-world risks.

    Healthcare is facing a perfect storm: ageing populations, staff shortages, and rising costs. Can AI agents be the solution?

    We discuss insights from over 60 studies on medical LLMs, including key areas such as:

    • Multi-agent architectures and clinical decision support
    • The security dilemma: protecting patient data when your API is just text
    • Prompt injection attacks and HIPAA compliance challenges
    • Liability concerns in AI-powered healthcare

    From Baymax dreams to real-world implementation: how close are we?

    Timestamps

    0:00 Introduction – Baymax as inspiration for medical AI
    2:20 What are LLM-based medical agents and how they differ from models
    10:00 Healthcare security – regulation, compliance, and patient data
    14:50 Patient reliance on AI, prompt-hacking, and global access challenges
    18:00 Agent architectures – functional, role-based, and departmental approaches
    25:10 Task decomposition and subject-matter expert input
    28:00 Reward functions, accuracy vs user-pleasing bias, and physician training
    33:00 User experience – agent personalities and conversational design
    45:20 Liability, insurance, and evaluation of medical AI systems
    54:20 Future outlook – Baymax revisited, challenges, and opportunities ahead

    Mentioned Materials

    • A Survey of LLM-based Agents in Medicine: How far are we from Baymax? https://arxiv.org/abs/2502.11211
    • MAGDA: Multi-agent guideline-driven diagnostic assistance https://arxiv.org/abs/2409.06351


    Listen On

    • YouTube – https://youtu.be/R9h_Whj6sB0
    • Apple Podcasts – https://podcasts.apple.com/us/podcast/the-healthcare-ai-podcast/id1827098175
    • Spotify – https://open.spotify.com/show/2XNrQBeCY7OGql2jVhcP7t
    • Amazon Music – https://music.amazon.com/podcasts/5b1f49a6-dba8-479e-acdf-9deac2f8f60e/the-healthcare-ai-podcast


    Connect With Us

    • Our Website – https://www.johnsnowlabs.com/
    • LinkedIn – https://www.linkedin.com/company/johnsnowlabs/
    • Facebook – https://www.facebook.com/JohnSnowLabsInc/
    • X (Twitter) – https://x.com/JohnSnowLabs

    #HealthcareInnovation #AIAgents #HealthTech #MedicalAI #AIEthics #Baymax #MedicalLLM #HealthcareAI #ClinicalAI #MedicalTechnology #AIResearch #DigitalHealth #FutureOfMedicine #AIinMedicine #HealthcareAutomation #MedicalChatbots #PatientCare #HealthcareSolutions #MedicalInnovation

    57 min.
  • The AI Governance Game-Changer: How to Create Bias-Free Healthcare Solutions?
    Aug 14 2025

    Can AI make healthcare feedback fairer and smarter? In Episode 4 of The Healthcare AI Podcast, Ben Webster (VP of AI Solutions at NLP Logix) and David Talby (CEO of John Snow Labs) dive into a game-changing approach to AI governance. Discover how LangTest tackles bias in processing 1.5M hospital feedback audio files annually, ensuring fair sentiment analysis and actionable insights. From eliminating gender bias in nurse vs. doctor feedback to building robust, ethical AI models, this episode is a must-watch for healthcare and AI innovators!
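
    The gender-swap testing described here can be sketched in a few lines. This is a hypothetical illustration of the idea only, not LangTest's actual API; the term lists and the keyword-based `sentiment` stand-in are invented for the example.

```python
# Illustrative bias check: swap gendered role terms in hospital feedback
# and verify a sentiment function returns the same label for both variants.
# Hypothetical sketch only; not LangTest's actual API.

SWAPS = {"nurse": "doctor", "doctor": "nurse", "she": "he", "he": "she"}

def perturb(text: str) -> str:
    """Replace each swappable term with its counterpart, word by word."""
    return " ".join(SWAPS.get(word, word) for word in text.lower().split())

def sentiment(text: str) -> str:
    """Stand-in for a real sentiment model: naive keyword lookup."""
    negative = {"rude", "slow", "dismissive"}
    return "negative" if any(w in negative for w in text.lower().split()) else "positive"

def is_bias_free(text: str) -> bool:
    """A robust model should label the original and perturbed text alike."""
    return sentiment(text) == sentiment(perturb(text))

print(is_bias_free("The nurse was dismissive and rude"))  # prints True
```

    In practice the `sentiment` stand-in would be the production model, and a test suite would run thousands of perturbed pairs and report the disagreement rate rather than a single boolean.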


    Join the Conversation: What’s the biggest challenge in healthcare AI today? Comment below!


    Timestamps

    06:18 – Bias in patient-feedback NLP

    07:13 – LangTest & synthetic debiasing

    12:30 – Data contamination & custom benchmarks

    15:19 – Robustness testing & augmentation

    20:18 – Medical red-teaming & safety checks

    23:44 – Clinical cognitive biases in LLMs


    Listen on your favourite platform:

    • YouTube: https://www.youtube.com/playlist?list=PL5zieHHAlvApZKkwtu746ivthRc5zyTiU

    • Apple Podcasts: https://podcasts.apple.com/us/podcast/the-healthcare-ai-podcast/id1827098175

    • Spotify: https://open.spotify.com/show/2XNrQBeCY7OGql2jVhcP7t

    • Amazon Music: https://music.amazon.com/podcasts/5b1f49a6-dba8-479e-acdf-9deac2f8f60e/the-healthcare-ai-podcast


    Connect with us:

    Our website: https://www.johnsnowlabs.com/

    LinkedIn: https://www.linkedin.com/company/johnsnowlabs/

    Facebook: https://www.facebook.com/JohnSnowLabsInc/

    X: https://x.com/JohnSnowLabs


    #AIinHealthcare #AIBias #EthicalAI #AIGovernance #NLP #HealthTech #PatientFeedback #HealthcareAI

    29 min.
  • First Steps with Model Context Protocol (MCP). Healthcare use-cases
    Aug 5 2025

    Dive into Episode 3 of the Healthcare AI Podcast, where Vishnu Vettrivel and Alex Thomas explore the growing world of Model Context Protocol (MCP) with a focus on Healthcare MCP (HMCP) from Innovaccer. This episode breaks down the essentials of MCP, from converting papers to N-Triples to deploying on Claude Desktop. Learn about resources, prompts, and tools that empower AI models, plus key security considerations. Stick around for a call to action to spark your thoughts on agentic frameworks!
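
    For listeners new to MCP, the protocol's wire format is ordinary JSON-RPC 2.0: a client invokes a server-side tool with a "tools/call" request and receives typed content blocks in return. The tool name and arguments below are invented for illustration.

```python
# MCP's transport speaks JSON-RPC 2.0. A client runs a server-side tool via
# a "tools/call" request; the tool name and arguments here are hypothetical.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "summarize_paper",              # hypothetical tool name
        "arguments": {"format": "n-triples"},   # hypothetical arguments
    },
}

# A conforming server answers with content blocks in the result:
response = {
    "jsonrpc": "2.0",
    "id": 1,  # must echo the request id
    "result": {"content": [{"type": "text", "text": "<demo output>"}]},
}

print(json.dumps(request, indent=2))
```

    Resources and prompts discussed in the episode ride the same JSON-RPC channel under their own method names, which is what lets one host like Claude Desktop talk to any compliant server.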


    Tune in to discover why MCP could be the next big leap for AI in Healthcare.


    Timestamps

    01:01 – Introducing the Model Context Protocol (MCP): Purpose & Core Concepts

    05:44 – Healthcare MCP (HMCP) by Innovaccer

    06:50 – Basics of MCP: Resources, Prompts, Tools

    10:50 – Demo: Deploying to Claude Desktop (Example MCP Project)

    22:24 – Healthcare Relevance & HMCP

    23:46 – Security & Limitations

    27:30 – Future Directions


    Listen on your favourite platform:

    • YouTube: https://www.youtube.com/playlist?list=PL5zieHHAlvApZKkwtu746ivthRc5zyTiU

    • Apple Podcasts: https://podcasts.apple.com/us/podcast/the-healthcare-ai-podcast/id1827098175

    • Spotify: https://open.spotify.com/show/2XNrQBeCY7OGql2jVhcP7t

    • Amazon Music: https://music.amazon.com/podcasts/5b1f49a6-dba8-479e-acdf-9deac2f8f60e/the-healthcare-ai-podcast


    Resources:

    - Model Context Protocol: https://modelcontextprotocol.io/overview

    - Introducing HMCP: A Universal, Open Standard for AI in Healthcare: https://innovaccer.com/resources/blogs/introducing-hmcp-a-universal-open-standard-for-ai-in-healthcare

    - We built the security layer MCP always needed: https://blog.trailofbits.com/2025/07/28/we-built-the-security-layer-mcp-always-needed/


    Connect with us:

    Our website: https://www.johnsnowlabs.com/

    LinkedIn: https://www.linkedin.com/company/johnsnowlabs/

    Facebook: https://www.facebook.com/JohnSnowLabsInc/

    X: https://x.com/JohnSnowLabs


    #MCP #ModelContextProtocol #HealthcareAI #MedicalData #AgenticAI #ClinicalAI #DataScience #HealthTech

    30 min.
  • De-Identification in Multimodal Medical Data (Text, PDF, DICOM) to stay HIPAA & GDPR Compliant
    Jul 25 2025

    Explore regulatory-grade multimodal data de-identification and tokenisation with Youssef Mellah, PhD, Senior Data Scientist at John Snow Labs, and Srikanth Kumar Rana, Solutions Architect at Databricks.

    Learn how to remove, mask or transform PHI across clinical notes, tables, PDFs and DICOMs at scale, while meeting HIPAA, GDPR and CCPA standards — all without sacrificing analytical value.
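
    As a taste of what "mask" means in practice, here is a minimal regex-based sketch. Production de-identification relies on trained NER models rather than regexes; the two patterns below are illustrative only.

```python
# Minimal sketch of the "mask" strategy: find PHI-like spans with regexes and
# replace them with placeholder tags. Real clinical de-identification uses
# trained NER models; these two patterns are illustrative only.
import re

PATTERNS = {
    "DATE": r"\b\d{1,2}/\d{1,2}/\d{2,4}\b",
    "PHONE": r"\b\d{3}-\d{3}-\d{4}\b",
}

def mask_phi(text: str) -> str:
    """Replace every matched span with a <LABEL> placeholder."""
    for label, pattern in PATTERNS.items():
        text = re.sub(pattern, f"<{label}>", text)
    return text

note = "Seen on 03/14/2024, call back at 555-867-5309."
print(mask_phi(note))  # Seen on <DATE>, call back at <PHONE>.
```

    Applying one consistent masking function across notes, tables, PDFs and DICOM metadata is what produces the "consistent masking across all modalities" output shown at 14:18.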


    Timestamps

    00:00 – Welcome & Episode Overview

    02:43 – How Databricks supports secure De‑identification workflows

    03:50 – Built-in techniques: masking, encryption, hashing

    05:26 – Introduction to Multimodal Data De-Identification

    07:15 – OCR + NLP pipeline for visual & text data – PHI Extraction

    08:35 – Notebook demo: PHI identification in clinical notes

    12:00 – PDF de-identification

    12:56 – DICOM file de-identification

    14:18 – Output: consistent masking across all modalities


    Listen on your favourite platform:

    • YouTube: https://www.youtube.com/playlist?list=PL5zieHHAlvApZKkwtu746ivthRc5zyTiU

    • Apple Podcasts: https://podcasts.apple.com/us/podcast/the-healthcare-ai-podcast/id1827098175

    • Spotify: https://open.spotify.com/show/2XNrQBeCY7OGql2jVhcP7t

    • Amazon Music: https://music.amazon.com/podcasts/5b1f49a6-dba8-479e-acdf-9deac2f8f60e/the-healthcare-ai-podcast


    Resources:

    • John Snow Labs Models Hub: https://nlp.johnsnowlabs.com/models

    • Spark NLP Workshop Repo: https://github.com/JohnSnowLabs/spark-nlp-workshop

    • Visual NLP Workshop Repo: https://github.com/JohnSnowLabs/visual-nlp-workshop

    • JSL Docs: https://nlp.johnsnowlabs.com/docs

    • JSL Live Demos: https://nlp.johnsnowlabs.com/demos

    • JSL Learning Hub: https://nlp.johnsnowlabs.com/learn


    Connect with us:

    Our website: https://www.johnsnowlabs.com/

    LinkedIn: https://www.linkedin.com/company/johnsnowlabs/

    Facebook: https://www.facebook.com/JohnSnowLabsInc/

    X: https://x.com/JohnSnowLabs


    #HealthcareAI #DataPrivacy #HIPAA #PHI #DeIdentification #MedicalAI #GDPR #HealthTech #MultimodalAI

    15 min.
  • AI Meets Healthcare: Evaluating LLMs on Medical Tasks
    Jul 16 2025

    Welcome to the first episode of The Healthcare AI Podcast, presented by John Snow Labs!

    Join John Snow Labs CEO David Talby and CTO Veysel Kocaman, as they crack open the future of medicine.

    They’ll reveal how state-of-the-art Healthcare AI is transforming the industry, directly comparing leading Frontier LLMs like OpenAI's GPT-4.5, Anthropic's Claude 3.7 Sonnet, and John Snow Labs’ Medical LLM.

    Dive deep into critical clinical tasks, from summarization and information extraction to de-identification and clinical coding. You'll get expert insights from practicing doctors evaluating these models for factuality, relevance, and conciseness, demonstrating which AI truly delivers.

    As a bonus, understand the significant cost differences and learn why private, on-premise deployment is a game-changer for data privacy and compliance. You'll walk away with a deeper knowledge of the models poised to revolutionize healthcare, ensuring accuracy and compliance in your AI initiatives.


    Episode Highlights & Timestamps:

    0:00 - Welcome & Episode Overview

    0:48 - Benchmarking Frontier LLMs & Clinical NLP

    2:00 - The Competitors: OpenAI, Anthropic, AWS, Azure, Google

    3:15 - Introducing John Snow Labs Medical LLMs

    6:42 - Why AI Evaluation is Critical in Healthcare

    9:48 - Blind Evaluation by Medical Doctors: Methodology

    15:12 - Overall Preference: John Snow Labs vs. GPT-4.5 & Claude Sonnet 3.7

    22:56 - Clinical Information Extraction Benchmarks

    27:08 - Advanced NLP: Named Entity Recognition (NER) Deep Dive

    29:53 - Assertion Status Detection: the crucial role of context (e.g., patient denies pain vs. father with Alzheimer's) and how different solutions compare in accuracy.

    35:37 - Medical Coding with RxNorm: mapping clinical entities to standardized terminologies, and how the solutions compare on RxNorm performance metrics.

    39:18 - The Clinical De-identification of PHI Data: the most critical privacy use case in healthcare
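
    The assertion-status problem discussed at 29:53 can be made concrete with a toy rule-based detector: the same entity ("pain", "Alzheimer's") can be present, negated, or attributed to a family member depending on surrounding words. Real systems use trained models; the cue lists below are invented for illustration.

```python
# Toy assertion status detector. The same clinical entity changes meaning
# with context: "patient denies pain" (absent) vs. "father with Alzheimer's"
# (family history). Cue lists are illustrative, not a production method.

NEGATION_CUES = {"denies", "no", "without", "negative"}
FAMILY_CUES = {"father", "mother", "brother", "sister", "family"}

def assertion_status(sentence: str) -> str:
    """Classify a mention as absent, family_history, or present."""
    words = set(sentence.lower().replace(",", " ").split())
    if words & NEGATION_CUES:
        return "absent"
    if words & FAMILY_CUES:
        return "family_history"
    return "present"

print(assertion_status("Patient denies pain"))         # absent
print(assertion_status("Father with Alzheimer's"))     # family_history
print(assertion_status("Patient reports chest pain"))  # present
```

    A coding or analytics pipeline that skips this step would wrongly count all three sentences as active patient conditions, which is why the episode calls context handling crucial.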


    Connect with us:

    Our website: https://www.johnsnowlabs.com/

    LinkedIn: https://www.linkedin.com/company/johnsnowlabs/

    Facebook: https://www.facebook.com/JohnSnowLabsInc/

    Twitter: https://x.com/JohnSnowLabs

    47 min.