
Health and Explainable AI Podcast

By: Pitt HexAI Lab and the Computational Pathology and AI Center of Excellence

About this title

The Health and Explainable AI podcast is a collaborative initiative between the Health and Explainable AI (HexAI) Research Lab in the Department of Health Information Management at the School of Health and Rehabilitation Sciences and the Computational Pathology and AI Center of Excellence (CPACE) at the University of Pittsburgh School of Medicine. Led by Ahmad P. Tafti, Hooman Rashidi, and Liron Pantanowitz, the podcast explores the transformative integration of responsible and explainable artificial intelligence into health informatics, clinical decision-making, and computational medicine.
  • Ekaterina Kldiashvili from the Tbilisi Medical Academy on Responsible Uses of AI, Medical Education and Inter-University Collaboration
    Feb 7 2026

    Ekaterina Kldiashvili, Vice Rector for Research at Petre Shotadze Tbilisi Medical Academy, and Pitt’s HexAI podcast host, Jordan Gass-Pooré, discuss public health, the incorporation of AI into healthcare, responsible uses of AI, medical education and inter-university collaboration.

    Ekaterina and Jordan explore opportunities and concerns surrounding commercial AI applications, noting that while AI can improve healthcare efficiency, it must support clinical reasoning rather than replace it. They cover the Tbilisi Medical Academy’s work on responsible AI use, particularly in educating providers and patients, and describe how AI-enhanced text and visuals can significantly improve patient understanding and follow-up rates. They also touch on the challenges of using AI in non-English languages such as Georgian and delve into advances in computational genomics and rapid molecular diagnostics. Looking ahead, they discuss the strengthening ties between the University of Pittsburgh and the Tbilisi Medical Academy through knowledge sharing and faculty training. They close with a broader look at inter-university collaboration and the idea of having students investigate how different cultures and communities trust and accept AI in healthcare settings.

    28 min
  • Richard Bonneau from Genentech on Drug Discovery, Computational Sciences and Machine Learning
    Dec 18 2025

    Richard Bonneau, Vice President of Machine Learning for Drug Discovery at Genentech and Roche, provides Pitt’s HexAI podcast host, Jordan Gass-Pooré, with an insider view on how his team is fundamentally changing and accelerating how new drug candidate molecules are designed, predicted, and optimized.

    Geared for students in computational sciences and hybrid STEM fields, the episode introduces listeners to uses of AI and ML in molecular design, the biomolecular structure and structure-function relationships that underpin drug discovery, and how distinct teams at Genentech work together through an integrated computational system.

    Richard and Jordan use the opportunity to touch on how advances in the molecule design domain can inspire and inform advances in computational pathology and laboratory medicine. Richard also delves into the critical role of Explainable AI (XAI), interpretability, and error estimation in the drug design-prototype-test cycle, and provides advice on domain knowledge and skills needed today by students interested in joining teams like his at Genentech and Roche.

    30 min
  • Dennis Wei from IBM on In-Context Explainability and the Future of Trustworthy AI
    Nov 19 2025

    Dennis Wei, Senior Research Scientist at IBM specializing in human-centered trustworthy AI, speaks with Pitt’s HexAI podcast host, Jordan Gass-Pooré, about his work focusing on trustworthy machine learning, including interpretability of machine learning models, algorithmic fairness, robustness, causal inference and graphical models.


    Focusing on explainable AI, they discuss in depth the explainability of Large Language Models (LLMs), the field of in-context explainability, and IBM’s new In-Context Explainability 360 (ICX360) toolkit. They explore research project ideas for students and touch on personalizing explainability outputs for different users and on leveraging explainability to guide and optimize LLM reasoning. They also discuss IBM’s interest in collaborating with university labs on explainable AI in healthcare, as well as related IBM work on the steerability of LLMs and on combining explainability with steerability to evaluate model modifications.


    This episode provides a deep dive into explainable AI, exploring how the field's cutting-edge research is contributing to more trustworthy applications of AI in healthcare. The discussion also highlights emerging research directions ideal for stimulating new academic projects and university-industry collaborations.


    Guest profile: https://research.ibm.com/people/dennis-wei

    ICX360 Toolkit: https://github.com/IBM/ICX360

    25 min