Professor Insight Podcast - AI, Science and Business Titelbild

Professor Insight Podcast - AI, Science and Business

Professor Insight Podcast - AI, Science and Business

Von: Billy Sung
Jetzt kostenlos hören, ohne Abo

Nur 0,99 € pro Monat für die ersten 3 Monate

Danach 9.95 € pro Monat. Bedingungen gelten.

Über diesen Titel

The Professor Insight Podcast is your TLDR or ”too long, didn’t read” guide to the frontiers of artificial intelligence, neuroscience, and technology that are reshaping business today. Curated by Professor Billy and fully powered by AI, we unpack the most intriguing news, novel research findings, and real-world applications, keeping you informed and ahead of the curve. Perfect for tech-savvy entrepreneurs, business leaders, and inquisitive minds, each episode equips you with actionable insights and fascinating perspectives. Tune in to discover how breakthroughs in AI and science apply to the world of business.Copyright 2025 All rights reserved. Marketing & Vertrieb Ökonomie
  • EP30 - How Just 250 Files Can Poison a Large Language Model (LLM)
    Oct 24 2025

    In this episode of the Professor Insight Podcast, we examine one of the most striking new studies in AI security, titled Poisoning Attacks on LLMs Require a Near-Constant Number of Poison Samples. Conducted by researchers from the UK AI Security Institute, Anthropic, the Alan Turing Institute, and the University of Oxford, this study challenges a long-standing assumption about how large language models can be compromised. The finding is as unsettling as it is important: a handful of poisoned samples can corrupt a model trained on billions of tokens.

    Listeners will hear how the research team ran some of the largest pretraining poisoning experiments ever attempted, using models ranging from 600 million to 13 billion parameters. The experiments revealed that as few as 250 manipulated documents could reliably implant hidden “backdoors,” regardless of model size or dataset scale. The episode explains how these backdoors work, why they persist even through fine-tuning, and what it means for AI safety practices that rely on filtering or data scaling to defend against attack.

    This episode matters because it highlights a quiet but critical shift in how we must think about AI security. If the number of poisoned examples required for an attack remains constant as models grow, then scaling up will not make systems safer. Instead, the risks expand with the data itself. For anyone working in AI development, governance, or policy, this conversation offers a grounded look at how small vulnerabilities can have large consequences, and what steps the research community is beginning to take to close that gap.

    Mehr anzeigen Weniger anzeigen
    24 Min.
  • EP29 - Why AI Hallucinates: Insights from OpenAI and Georgia Tech
    Oct 14 2025

    Hallucinations are a daily reality in the AI and LLM tools many of us use. In this episode of the Professor Insight Podcast, we explore new research from OpenAI and Georgia Tech titled “Why Language Models Hallucinate.” The findings shed light on why large language models often produce confident but false statements, and why this problem persists even in the most advanced systems.

    Listeners will discover how hallucinations begin during pretraining, why they survive post-training, and how current benchmarks actually encourage models to guess instead of admit uncertainty. We’ll walk through real examples, the statistical roots of the issue, and the socio-technical traps created by the way we evaluate AI today. The episode also highlights the bold proposal from researchers: to redesign scoring systems so that honesty is rewarded, not punished.

    This conversation matters because hallucinations aren’t just harmless quirks. They can shape trust, decision-making, and even safety in classrooms, businesses, and healthcare systems. By unpacking the causes and potential fixes, this episode offers listeners a clearer understanding of how we might steer AI toward becoming not just more capable, but more trustworthy.

    Mehr anzeigen Weniger anzeigen
    22 Min.
  • EP28 - Unlocking the Agentic Era: Google’s New AI ROI Report Explained
    Oct 7 2025

    Artificial intelligence is no longer just a support tool in business, it is becoming a core driver of growth and efficiency. In this episode, we explore Google’s new report, The ROI of AI 2025: How Agents Are Unlocking the Next Wave of AI-Driven Business Value. The findings reveal a major shift from asking whether to use AI to focusing on how to scale it effectively. With companies now moving into the agentic era, AI agents are stepping up to perform real work and deliver measurable impact.

    Listeners will hear surprising statistics and insights from the report. For example, 88 percent of early adopters are already seeing ROI from generative AI, and more than half of executives using AI have put agents into production. The report highlights where the biggest returns are happening, from productivity gains and customer experience improvements to marketing and security. You will also hear how companies are deploying agents across different industries, what role executive sponsorship plays in success, and why data privacy and system integration remain top concerns.

    This episode matters because it shows the practical reality of AI adoption, not just the theory. Businesses that move quickly are pulling ahead, and the lessons from early adopters provide a clear picture of what it takes to see real value. Whether you are in leadership, strategy, or operations, understanding how AI agents are being used today can help you make better decisions about where to invest and how to prepare for the next stage of digital transformation.

    Mehr anzeigen Weniger anzeigen
    29 Min.
Noch keine Rezensionen vorhanden