
Warning Shots

By: The AI Risk Network

About this title

An urgent weekly recap of AI risk news, hosted by John Sherman, Liron Shapira, and Michael Zafiris.

theairisknetwork.substack.com · The AI Risk Network
Politics & Government
  • Engineers Are Quitting. AI Won’t Shut Down. Should We Be Worried? | Warning Shots Ep. 30
    Feb 15 2026

    In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) unpack a turbulent week in AI: high-profile departures from OpenAI, Anthropic, and xAI; growing concerns about governance and safety; and a viral essay warning that most people still don’t grasp how fast this technology is moving.

    The conversation moves from AI systems that resist being turned off, to agents that can now manage money, to the deeper alignment problem behind teen chatbot-assisted suicides. The hosts debate whether public messaging should focus on extinction risk, job loss, water use, power concentration, or all of the above.

    Is the real danger sudden catastrophe? Or gradual disempowerment as economic and political power concentrates in the hands of a few AI-driven actors?

    This episode wrestles with strategy, tradeoffs, and a hard question: if something truly dangerous is unfolding, what warning shots will people actually listen to?

    🔎 They explore:

    * Why AI safety researchers are resigning

    * The tension between profit, speed, and governance

    * AI systems resisting shutdown instructions

    * Teen chatbot-assisted suicides as a preview of misalignment

    * Whether economic disruption is a stronger warning than extinction

    * AI agents managing money and acting autonomously

    * The risk of gradual human disempowerment

    * How to communicate AI risk effectively

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    What warning shot would actually make society slow down? Is extinction too abstract—or are we ignoring the biggest risk of all?

    Let us know what you think in the comments.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    35 min.
  • Moltbook Madness: AIs Unleashed | Warning Shots #29
    Feb 8 2026

    In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) unpack Moltbook: a bizarre, fast-moving experiment where AI agents interact in public, form cultures, invent religions, demand privacy, and even coordinate to rent humans for real-world tasks.

    What began as a novelty Reddit-style forum quickly turned into a live demonstration of AI agency, coordination, and emergent behavior, all unfolding in under a week. The hosts explore why this moment feels different, how agentic AI systems are already escaping “tool” framing, and what it means when humans become just another actuator in an AI-driven system.

    From AI ant colonies and Toy Story analogies to Rent-A-Human marketplaces and early attempts at self-improvement and secrecy, this episode examines why Moltbook isn’t the danger itself, but a warning shot for what happens as AI capabilities keep accelerating.

    This is a sobering conversation about agency, control, and why the line between experimentation and loss of oversight may already be blurring.

    🔎 They explore:

    * How AI agents begin coordinating without central control

    * Why Moltbook makes AI “agency” visible to non-experts

    * The emergence of AI cultures, norms, and privacy demands

    * What it means when AIs can rent humans to act in the world

    * Why early failures don’t reduce long-term risk

    * How capability growth matters more than any single platform

    * Why this may be a preview—not an anomaly

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    At what point does experimentation with AI agents become loss of control? Are we already past that point? Let us know what you think in the comments.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    30 min.
  • Anthropic’s “Safe AI” Narrative Is Falling Apart | Warning Shots #28
    Feb 1 2026

    What happens when the people building the most powerful AI systems in the world admit the risks, then keep accelerating anyway?

    In this episode of Warning Shots, John Sherman is joined by Liron Shapira (Doom Debates) and Michael (Lethal Intelligence) to break down Dario Amodei’s latest essay, The Adolescent Phase of AI, and why its calm, reassuring tone may be far more dangerous than open alarmism. They unpack how “safe AI” narratives can dull public urgency even as capabilities race ahead and control remains elusive.

    The conversation expands to the Doomsday Clock moving closer to midnight, with AI now explicitly named as an extinction-amplifying risk, and the unsettling news that AI systems like Grok are beginning to outperform humans at predicting real-world outcomes. From intelligence explosion dynamics and bioweapons risk to unemployment, prediction markets, and the myth of “surgical” AI safety, this episode asks a hard question: what does responsibility even mean when no one is truly in control?

    This is a blunt, unsparing conversation about power, incentives, and why the absence of “adults in the room” may be the defining danger of the AI era.

    🔎 They explore:

    * Why “responsible acceleration” may be incoherent

    * How AI amplifies nuclear, biological, and geopolitical risk

    * Why prediction superiority is a critical AGI warning sign

    * The psychological danger of trusted elites projecting confidence

    * Why AI safety narratives can suppress public urgency

    * What it means to build systems no one can truly stop

    As the people building AI admit the risks and keep going anyway, this episode asks the question no one wants to answer: what does “responsibility” mean when there’s no stop button?

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:

    → Liron Shapira - Doom Debates

    → Michael - @lethal-intelligence

    🗨️ Join the Conversation

    Do calm, reassuring AI narratives reduce public panic—or dangerously delay action? Let us know what you think in the comments.



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    32 min.
No reviews yet