• We’re Racing Toward AI We Can’t Control | For Humanity #79
    Feb 14 2026

In this episode of For Humanity, John sits down with AI professor and safety advocate David Krueger to discuss his new nonprofit Evitable, the race toward superintelligence, AI alignment, job loss, geopolitics, and why he believes we have less than five years to change course. David shares his journey from deep learning researcher to public advocate, his role in the 2023 Center for AI Safety extinction risk statement, and why he believes AI is not just a technical problem—but a governance and public awareness crisis.

    Together, they explore:

    * Why AI extinction risk is real

    * Why research alone won’t save us

    * The dangers of the AI chip supply chain race

    * Job displacement and political blind spots

    * Alignment skepticism

    * Whether treaties can work

    * What gives David hope in 2026

    If you’ve ever wondered whether AI risk is overblown—or not taken seriously enough—this is a conversation you don’t want to miss.

🔗 Follow David Krueger:

    * Learn more about Evitable

    * David’s Substack

    * Follow David on Twitter

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
1 hr 10 min
  • Can't We Just Pause AI? | For Humanity #78
    Jan 31 2026

What happens when AI risk stops being theoretical—and starts showing up in people’s jobs, families, and communities? In this episode of For Humanity, John sits down with Maxime Fournes, the new CEO of PauseAI Global, for a wide-ranging and deeply human conversation about burnout, strategy, and what it will actually take to slow down runaway AI development. From meditation retreats and personal sustainability to mass job displacement, data center backlash, and political capture, Maxime lays out a clear-eyed view of where the AI safety movement stands in 2026—and where it must go next.

    They explore why regulation alone won’t save us, how near-term harms like job loss, youth mental health crises, and community disruption may be the most powerful on-ramps to existential risk awareness, and why movement-building—not just policy papers—will decide our future. This conversation reframes AI safety as a struggle over power, narratives, and timing, and asks what it would take to hit a true global tipping point before it’s too late.

    Together, they explore:

    * Why AI safety must address real, present-day harms, not just abstract futures

    * How burnout and mental resilience shape long-term movement success

    * Why job displacement, youth harm, and data centers are political leverage points

    * The limits of regulation without enforcement and public pressure

    * How tipping points in public opinion actually form

    * Why protests still matter—even when they’re small

    * What it will take to build a global, durable AI safety movement

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
1 hr 14 min
  • Why Laws, Treaties, and Regulations Won’t Save Us from AI | For Humanity Ep. 77
    Jan 17 2026

What if the biggest mistake in AI safety is believing that laws, treaties, and regulations will save us? In this episode of For Humanity, John sits down with Peter Sparber, a former architect of Big Tobacco’s successful war against regulation, to confront a deeply uncomfortable truth: the AI industry is using the exact same playbook—and it’s working.

    Drawing on decades of experience inside Washington’s most effective lobbying operations, Peter explains why regulation almost always fails against powerful industries, how AI companies are already neutralizing political pressure, and why real change will never come from lawmakers alone. Instead, he argues that the only path to meaningful AI safety is making unsafe AI bad for business—by injecting risk, liability, and uncertainty directly into boardrooms and C-suites.

    Peter reveals why AI doesn’t need to outsmart humanity to defeat regulation; it only needs money, time, and political cover. By exposing how industries evade oversight, delay enforcement, and co-opt regulators, this conversation reframes AI safety around power, incentives, and accountability.

    Together, they explore:

    * Why laws, treaties, and regulations repeatedly fail against powerful industries

    * How Big AI is following Big Tobacco’s exact regulatory playbook

    * Why public outrage rarely translates into effective policy

    * How companies neutralize enforcement without breaking the law

    * Why third-party standards may matter more than legislation

    * How local resistance, liability, and investor pressure can change behavior

    * Why making unsafe AI bad for business is the only strategy with teeth

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
1 hr 24 min
  • What We Lose When AI Makes Choices for Us | For Humanity #76
    Dec 20 2025

    What if the greatest danger of AI isn’t extinction — but the quiet loss of our ability to think and choose for ourselves? In this episode of For Humanity, John sits down with journalist and author Jacob Ward (CNN, PBS, Al Jazeera; The Loop) to unpack the most under-discussed risk of artificial intelligence: decision erosion.

    Jacob explains why AI doesn’t need to become sentient to be dangerous — it only needs to be convenient. Drawing from neuroscience, behavioral psychology, and real-world reporting, he reveals how systems designed to “help” us are slowly pushing humans into cognitive autopilot.

    Together, they explore:

    * Why AI threatens near-term human agency more than long-term sci-fi extinction

    * How Google Maps offers a chilling preview of AI’s effect on the human brain

    * The difference between fast-thinking and slow-thinking — and why AI exploits it

    * Why persuasive AI may outperform humans politically and psychologically

    * How profit incentives, not intelligence, are driving the most dangerous outcomes

    * Why focusing only on extinction risk alienates the public — and weakens AI safety efforts

    👉 Follow More of Jacob Ward’s Work:

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.

    #AISafety #AIAlignment #ForHumanityPodcast #AIRisk #ForHumanity #JacobWard #AIandSociety #ArtificialIntelligence #HumanAgency #TechEthics #AIResponsibility



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
1 hr 20 min
• The Congressman Who Gets AI Extinction Risk — Rep. Bill Foster on the Future of Humanity | For Humanity | Ep. 75
    Dec 6 2025

    In this episode of For Humanity, John Sherman sits down with Congressman Bill Foster — the only PhD scientist in Congress, a former Fermilab physicist, and one of the few lawmakers deeply engaged with advanced AI risks. Together, they dive into a wide-ranging conversation about the accelerating capabilities of AI, the systemic vulnerabilities inside Congress, and why the next few years may determine the fate of our species.

    Foster unpacks why AI risk mirrors nuclear risk in scale, how interpretability is collapsing as models evolve, why Congress is structurally incapable of responding fast enough, and how geopolitical pressures distort every conversation on safety. They also explore the looming financial bubble around AI, the coming energy crunch from massive data centers, and the emerging threat of anonymous encrypted compute — a pathway that could enable rogue actors or rogue AIs to operate undetected.

    If you want a deeper understanding of how AI intersects with power, geopolitics, compute, regulation, and existential risk, this conversation is essential.

    Together, they explore:

* The real risks emerging from today’s AI systems — and what’s coming next

    * Why Congress is unprepared for AGI-level threats

    * How compute verification could become humanity’s safety net

    * Why data centers may reshape energy, economics, and local politics

    * How scientific literacy in government could redefine AI governance

    👉 Follow More of Congressman Foster’s Work:

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.

    #AISafety #AIAlignment #ForHumanityPodcast #AIRisk #FutureOfAI #AIandWarfare #AutonomousWeapons #AIEthics #TechForGood #ArtificialIntelligence



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
1 hr 10 min
  • AI Risk, Superintelligence & The Fight Ahead — A Deep Dive with Liv Boeree | For Humanity #74
    Nov 22 2025

    In this episode of For Humanity, John sits down with Liv Boeree — poker champion, systems thinker, and longtime AI risk advocate — for a candid conversation about where we truly stand in the race toward advanced AI. Liv breaks down why public understanding of superintelligence is so uneven, how misaligned incentives shape the entire ecosystem, and why issues like surveillance, culture, and gender dynamics matter more than people realize.

    They explore the emotional realities of working on existential risk, the impact of doomscrolling, and how mindset and intuition keep people grounded in such turbulent times. The result is a clear, grounded, and surprisingly hopeful look at the future of technology, power, and responsibility. If you’re passionate about understanding AI’s real impacts (today and tomorrow), this is a must-watch.

    Together, they explore:

    * The real risks we face from AI — today and in the coming years

    * Why public understanding of superintelligence is so fractured

    * How incentives, competition, and culture misalign technology with human flourishing

    * What poker teaches us about deception, risk, and reading motives

    * The role of women, intuition, and “mama bear energy” in the AI safety movement

    👉 Follow More of Liv Boeree’s Work:

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.

    #AISafety #AIAlignment #ForHumanityPodcast #AIRisk #FutureOfAI #AIandWarfare #AutonomousWeapons #AIEthics #TechForGood #ArtificialIntelligence



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
1 hr 18 min
  • AI Safety on the Frontlines | For Humanity #73
    Nov 8 2025

    In this episode of For Humanity, host John Sherman speaks with Esben Kran, one of the leading figures in the for-profit AI safety movement, joining live from Ukraine — where he’s exploring the intersection of AI safety, autonomous drones, and the defense tech boom.

    🔎 They discuss:

    * The rise of for-profit AI safety startups and why technology must lead regulation.

    * How Ukraine’s drone industry became the frontline of autonomous warfare.

    * What happens when AI gains control — and how we might still shut it down.

    * The chilling concept of a global “AI kill chain” and what humanity must do now.

    Esben also shares insights from companies like Lucid Computing and Workshop Labs, the growing global coordination challenges, and why the next AI safety breakthroughs may not come from labs in Berkeley — but from battlefields and builders abroad.

    🔗 Subscribe for more conversations about AI risk, ethics, and the fight to build a safe future for humanity.

    📺 Watch more episodes

    #AISafety #AIAlignment #ForHumanityPodcast #AIRisk #FutureOfAI #AIandWarfare #AutonomousWeapons #AIEthics #TechForGood #ArtificialIntelligence



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
56 min
  • Stuart Russell: “AI CEO Told Me Chernobyl-Level AI Event Might Be Our Only Hope” | For Humanity #72
    Oct 25 2025

    Let’s face it: in the long run, there’s either going to be safe AI or no AI. There is no future with powerful unsafe AI and human beings. In this episode of For Humanity, John Sherman speaks with Professor Stuart Russell — one of the world’s foremost AI pioneers and co-author of Artificial Intelligence: A Modern Approach — about the terrifying honesty of today’s AI leaders.

    Russell reveals that the CEO of a major AI company told him his best hope for a good future is a “Chernobyl-scale AI disaster.” Yes — one of the people building advanced AI believes only a catastrophic warning shot could wake up the world in time. John and Stuart dive deep into the psychology, politics, and incentives driving this suicidal race toward AGI.

    They discuss:

    * Why even AI insiders are losing faith in control

    * What a “Chernobyl moment” could actually look like

    * Why regulation isn’t anti-innovation — it’s survival

    * The myth that America is “allergic” to AI rules

    * How liability, accountability, and provable safety could still save us

    * Whether we can ever truly coexist with a superintelligence

    This is one of the most urgent conversations ever hosted on For Humanity. If you care about your kids’ future — or humanity’s — don’t miss this one.

🎙️ About For Humanity: A podcast from the AI Risk Network, hosted by John Sherman, making AI extinction risk a kitchen-table conversation on every street.

    📺 Subscribe for weekly conversations with leading scientists, policymakers, and ethicists confronting the AI extinction threat.

    #AIRisk #ForHumanity #StuartRussell #AIEthics #AIExtinction #AIGovernance #ArtificialIntelligence #AIDisaster #GuardRailNow



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
1 hr 33 min