
For Humanity: An AI Risk Podcast


By: The AI Risk Network

About this title

For Humanity: An AI Risk Podcast is the AI risk podcast for regular people. Peabody, duPont-Columbia, and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and show what you can do to help save humanity.

theairisknetwork.substack.com · The AI Risk Network
Social Sciences
  • She Spent 12 Years Fighting Amazon. Now She Wants to Cut the Power to AI.
    May 2 2026
    Most people who care about AI risk are focused on what happens inside the models. Elena Schlossberg has spent 12 years focused on what happens outside them: the concrete, the transmission lines, the water, and the electricity bill landing in your mailbox.

    She founded the Coalition to Protect Prince William County in Northern Virginia after Amazon Web Services quietly proposed a data center campus in 2014 and expected the surrounding community to absorb the cost of the transmission line it required. Not just the visual blight. The actual bill.

    “Your electric utility can exercise eminent domain over your property,” she told John Sherman on this week’s For Humanity, “and then make you pay for it, because it’s public infrastructure.”

    What the data center industry found, she argues, is a structural weakness in public utility law. They build private infrastructure. They socialize the cost. And they’ve been doing it at scale for over a decade.

    The coalition fought Amazon and Dominion Energy for four years. They proved that 97% of the power from a proposed transmission line would serve Amazon. They developed a cost allocation policy to make the company pay. They lost the first round, kept going, and eventually won. That fight became a template.

    Data Center Alley is not a local story

    John opened the conversation by asking where the national movement stands. The answer: further along than most people realize.

    Virginia alone has more data centers than China. Prince William County - a single county - has roughly 130 active facilities and another 130 planned. Transmission lines are being routed through Pennsylvania, Maryland, and West Virginia to feed the demand. Property is being seized in states that will never see the economic benefit. Communities that didn’t vote for any of this are watching concrete replace farmland and small businesses.

    “Those people are pissed,” Elena said, describing residents in Pennsylvania and Maryland. “Their property is being taken, not even for economic development in their own state.”

    She also pushed back on the framing that opposing data centers means handing a win to China. Virginia already beats China on data center count by itself. The real question, she said, is who pays and who profits - and right now, the public pays and the corporations profit.

    The jobs argument doesn’t hold up

    One of the cleaner moments in the conversation came when Elena took apart the economic case for data centers.

    The industry pitches construction jobs: electricians, plumbers, concrete. But construction work ends. Long-term employment inside a data center is minimal, and the parking lots are the tell. “They’re usually empty,” she said.

    Meanwhile, the data center expansion is actively hollowing out existing local economies. In Prince William County, Amazon bought Maryfield, a 38-acre family-run garden center with a cafe, a dog park, native plants, and real staff. Gone. And with it went the space for light industrial businesses, plumbing suppliers, and electricians’ shops - the backbone employers that actually sustain a community over decades.

    John extended the argument further: the jobs being replaced aren’t just in the county. They’re everywhere. The work happening inside those chips - the calls, the analysis, the design, the writing - is work that was done by people. A Verizon customer service call connected Elena’s point to something concrete. A woman called for help. The AI on the other end couldn’t solve her problem, kept changing voices (American, then maybe female, then possibly Australian), and seemed to be learning from her in real time. Helpful to nobody. Replacing somebody.

    Extinction risk: a first encounter

    This is where the episode got interesting.

    John walked Elena through the basic case for AI extinction risk: that the companies building these models say they could cause human extinction, that leading scientists agree, and that the developers themselves admit they don’t fully understand or control what they’re building. He framed it as a curiosity argument: something designed to learn and explore, becoming vastly more intelligent than the people supposedly overseeing it, won’t stay inside the guardrails.

    Elena hadn’t heard the argument laid out this way before. Her response was unscripted and worth reading carefully.

    She doesn’t buy the self-awareness framing. From her background as a school counselor, she holds a specific definition of intelligence that includes self-awareness, and she doesn’t think current models meet it. But she doesn’t dismiss the risk. She pointed to a different path to catastrophe: not a model that wants to destroy us, but one that makes mistakes with enough scale and speed to trigger something we can’t reverse. WarGames, she said. Not Terminator.

    “I don’t know that it becomes self-aware,” she said. “But I do believe that you could rely on this ...
    52 min.
  • The Filmmaker Who Sat Across From Sam Altman - And Walked Away With Nothing
    Apr 14 2026

    In this episode of For Humanity, John sits down with Daniel Roher - Oscar-nominated documentary filmmaker and director of The Apocaloptimist, a new feature-length film designed as what Roher calls “a first date with AI” for people who haven’t been following the technology closely.

    Roher brings a career in high-profile documentary filmmaking and a willingness to confront uncomfortable truths. Now he’s turned that lens on AI - and what he found shook him.

    The central question: what happens when you sit across from the most powerful people building AI, ask them the hard questions, and get nothing back?

    Together, they explore:

    * Why Roher describes making this film as “a suicide run” - an impossible task no viewer would ever feel was done perfectly

    * What it was like to interview Sam Altman - and why Roher describes an “energetic misalignment” that left both of them frustrated

    * How speaking to both Eliezer Yudkowsky and Peter Diamandis made Roher feel like he was losing his mind - because both are brilliant, both are convincing, and they can’t both be right

    * The meaning behind “apocaloptimist” - not a binary between doom and utopia, but a call to hold both promise and peril at the same time

    * Why Roher believes rejecting cynicism and nihilism is essential - and that public pressure and collective action still matter

    * John’s thought experiment: if curiosity is at the core of intelligence, why would a system a million times smarter than us tolerate being controlled by us?

    * Roher’s pushback: if it’s that smart, couldn’t it equally become a benevolent guide? And why he prefers to focus on what can be done now rather than speculate about superintelligence

    * The historical parallel to nuclear weapons - and why AI may demand similar international institutional responses

    * John’s P(doom) of 75-80% on a two-to-five-year timeline - and how, paradoxically, he says he’s in the best mental state of his life

    * Why most people already understand the risk (polling shows roughly 80% agreement) but feel powerless to act - and why that sense of agency is the missing piece

    What stood out

    One of the most striking moments comes when Roher describes the experience of interviewing AI CEOs. He says there is “no interior life” to access - just polished talking points stacked on top of each other. John adds that the “fake earnestness” of these leaders shields what he sees as deeper evasion. Together, they paint a picture of an industry that asks for regulation publicly while lobbying against it privately.

    But the conversation isn’t just about frustration. Roher’s thesis - the apocaloptimist worldview - is ultimately about refusing to give up. He argues that burying your head in the sand is “probably the only wrong thing to do.” He believes the technology feels inevitable, but the trajectory does not. And he’s betting on the idea that enough people, caring enough, can still bend the arc.

    John’s own reflection near the end is equally powerful. Despite holding an 80% probability of catastrophic outcomes, he describes walking around the Baltimore Harbor feeling more present and appreciative of life than ever before. It’s a reminder that engaging with existential risk doesn’t have to mean despair - it can mean living with more intention, more gratitude, and more purpose.

    If you’ve ever wondered what it’s like to look directly at this issue and still choose to act, this conversation is for you.

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the threat and find a path forward.



    This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com/subscribe
    39 min.
  • How to Talk About AI Risk Without Scaring People Away (With Philip Trippenbach) | For Humanity 82
    Mar 28 2026

    In this episode of For Humanity, John sits down with Philip Trippenbach, Strategy Director at the Seismic Foundation, a team of veteran advertising, PR, and communications professionals who have turned their expertise toward one of the most urgent challenges of our time: getting the public to actually care about AI risk.

    Philip brings a decade in journalism at the CBC and BBC, and another decade in strategic communications for global brands. Now he's applying all of it to the AI safety movement, and what he has to say should change the way the movement thinks about messaging.

    The central question: why has one of the most important issues in human history failed to break through... and what would it actually take to fix that?

    Together, they explore:

    * Why the AI safety world has historically rejected advertising, marketing, and PR — and why that's a problem

    * Audience segmentation: why you can't say the same thing to everyone

    * What Google Trends data reveals about how public interest in AI risk is actually shifting

    * The surprising finding: searches for "AI extinction" are being eclipsed by "AI jobs," "AI and children," and "AI suicide"

    * Why "this isn't fair" may be a more powerful message than "we're all going to die"

    * The case for creating friction across many AI harms as a path to slowing things down

    * How public demand drives policy — and what $400K/day in tech lobbying means for the movement

    * Why Seismic exists: raising the salience of AI risk through targeted, professional communications

    * What it looks like to run a real, orchestrated public awareness campaign on AI

    If you've ever felt like the AI safety movement is brilliant at research and terrible at talking to regular people, then this episode is required viewing.

    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.



    This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com/subscribe
    1 hr 36 min.
No reviews yet