Preventing AI Hallucinations in Healthcare: How Specialized LLMs can Transform Drug Safety


About this episode

AI in healthcare can save lives, or put them at risk. This episode explores guardrails, safety LLMs, regulation, and why generic AI safety controls fail in clinical settings.


Timestamps:

00:00 Introduction

01:26 What are LLM guardrails and why do they matter in healthcare

02:36 Why AI hallucinations are dangerous in medical settings

03:47 Why people still use chatbots for medical advice

05:13 Why generic AI safety tools fail in healthcare

06:16 Regulation pressure: US vs Europe

09:03 Guardrail frameworks: Guardrails AI, NeMo, Llama Guard

15:08 Safety LLMs and red teaming medical AI

22:17 Why healthcare AI needs application-specific testing

27:49 Shift-left AI safety and responsible design

32:44 The ELIZA effect

37:27 Practical advice for teams building healthcare AI


RESOURCES FOR LISTENERS ►

Papers:

- Hakim et al. (2025) "The need for guardrails with large language models in pharmacovigilance."

- Meta's Llama Guard paper: "Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations" (arXiv:2312.06674)

- Ayala-Lauroba et al. (2024) "Enhancing Guardrails for Safe and Secure Healthcare AI" (arXiv:2409.17190)


Code and Models:

- Hakim et al. analysis code: https://github.com/jlpainter/llm-guardrails/

- Llama Guard: Available on Hugging Face (requires approval)

- gpt-oss-safeguard: https://huggingface.co/openai/gpt-oss-safeguard-20b (Apache 2.0)


Medical Ontologies:

- MedDRA (Medical Dictionary for Regulatory Activities): https://www.meddra.org/

- WHO Drug Dictionary: Via Uppsala Monitoring Centre


Regulatory Guidance:

- EMA AI Reflection Paper: https://www.ema.europa.eu/en/about-us/how-we-work/data-regulation-big-data-other-sources/artificial-intelligence

- FDA AI Guidance: Available on FDA.gov



LISTEN ON ►

YouTube: https://youtu.be/IWoARQ0G7sg

Apple Podcasts: https://podcasts.apple.com/us/podcast/the-healthcare-ai-podcast/id1827098175

Spotify: https://open.spotify.com/show/2XNrQBeCY7OGql2jVhcP7t

Amazon Music: https://music.amazon.com/podcasts/5b1f49a6-dba8-479e-acdf-9deac2f8f60e/the-healthcare-ai-podcast



FOLLOW ►

Website: https://www.johnsnowlabs.com/

LinkedIn: https://www.linkedin.com/company/johnsnowlabs/

Facebook: https://www.facebook.com/JohnSnowLabsInc/

X (Twitter): https://x.com/JohnSnowLabs


#HealthcareAI #AIGuardrails #MedicalAI #AISafety #AIEthics #HealthTech #AIRegulation #DigitalHealth #AIinMedicine #MLOps #AICompliance #AIHallucinations
