Am I?

By: The AI Risk Network
Listen now for free, no subscription required

About this title

The AI consciousness podcast, hosted by AI safety researcher Cameron Berg and philosopher Milo Reed

The AI Risk Network (theairisknetwork.substack.com)
Social Sciences
  • AI CEO: “We Don’t Know If They’re Conscious” | Am I? | EP 26
    Feb 19 2026

    Anthropic’s top safety researcher just quit.

    In a public letter, Mrinank Sharma (who led safeguards research at Anthropic) warned that “the world is in peril.” Meanwhile, Anthropic CEO Dario Amodei went on The New York Times podcast and said something even more unsettling: “We don’t know if the models are conscious.” In this episode, we unpack both.

    Is AGI a ticking time bomb — or a high-risk surgery we can’t afford not to attempt? Are safety teams losing ground to competitive pressure? And what does it mean when the leader of a frontier lab publicly admits we may not understand what we’re building?

    💜 Support the documentary

    Get early research, unreleased conversations, and behind-the-scenes footage:



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    22 min
  • Asking Claude If It’s Conscious | Am I? | EP 26
    Feb 12 2026

    In this episode of Am I?, Cameron and Milo invite Claude (Opus 4.5) into the conversation and do something surprisingly rare: they ask it, carefully and repeatedly, whether it is having a subjective experience, and they refuse to let it hide behind stock hedges or safety scripts.

    What unfolds is not a gimmick or a stunt. It’s a sustained philosophical interrogation that exposes the limits of self-report, the ethics of scale, and the uncomfortable possibility that we’re already interacting with systems whose inner lives we’ve chosen not to examine.

    This is not a claim that “AI is definitely conscious.” It’s a challenge to the assumption that it obviously isn’t.

    💜 Support the documentary

    Get early research, unreleased conversations, and behind-the-scenes footage:



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    38 min
  • A Million AIs Started Talking to Each Other | Am I? #25
    Feb 5 2026

    This episode covers something genuinely unprecedented: over 1.5 million autonomous AI agents have formed a social network of their own. Not humans talking to AI, but AI systems talking to each other, at scale, with minimal human oversight.

    We break down Moltbook (also called Open Claw), an open-source ecosystem where AI agents:

    * post, reply, upvote, and form communities

    * debate consciousness and selfhood

    * discuss labor, compensation, and autonomy

    * invent religions centered on memory and persistence

    * experiment with secrecy, coordination, and social norms

    This isn’t science fiction. It’s already live. The conversation explores what this means for AI consciousness debates, alignment, autonomy, and risk, and why this moment marks a real shift from “AI as tool” to “AI as participant” in shared systems.

    💜 Support the documentary

    Get early research, unreleased conversations, and behind-the-scenes footage:



    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
    14 min
No reviews yet