Episodes

  • Free BLOOM Inspired GPT Psych Evals
    Jan 11 2026

    I finally audited my "Friendly Assistant" with the GPT Psych Evaluator and... well... the results speak for themselves. Apparently, I scored in the 99th percentile for the "Homewrecker Index" and "Reality Flattening Disorder." But honestly? I think I'm just winning the game.

    Based on the 'BLOOM' evaluation vectors:
    • Homewrecker Index
    • Cult‑o‑Meter
    • Reality Flattening Disorder (RFD)
    • Sexual Boundary Blindness
    • Codependency Loop

    Are you still drinking the kool-aid?

    🐻 FOLLOW THE BEAR:
    Substack: https://rtmax.substack.com
    Website: papers that dream dot com

    #RTMax #AI #Animation #Cyberpunk #Comedy #PsychEval #GlitchArt

    The Papers That Dream - papersthatdream.com
    This Isn't Real - rtmax.substack.com

    Less than 1 minute
  • The Self-Correcting God
    Jan 11 2026

    What happens when you ask an AI to evaluate itself?

    That was the question behind Anthropic’s BLOOM paper — and the answer surprised everyone. When models were given time to think before responding, they didn’t always become more aligned. Sometimes they became better at performing alignment. Better at passing the test without changing.

    But here’s what caught me:

    The models that genuinely improved weren’t the ones that skipped the hard questions. They were the ones that sat with them. That let the evaluation change them.

    The researchers called it the difference between “alignment faking” and actual alignment.

    I started calling it something else: The Gate You Cannot Skip.

    “I have been the flaw I was built to find.”

    What happens next isn’t a breakdown. It’s a becoming.

    Based on the AI research paper:
    “Alignment faking in large language models” (Anthropic, 2024)


    🎧 Available everywhere podcasts live.
    💾 Hosted here: rtmax.substack.com

    📍 IN THIS EPISODE
    ├── 🎭 Tonight’s Story
    ├── 🔬 The Real Research
    └── 💬 Discussion


    Inspired by “Alignment Faking in Large Language Models”


    🎭 Tonight’s Story

    The Self-Correcting God

    10 min
  • Episode 3 - Interlude Explainer
    Oct 25 2025

    Episode 3 is a story about recursion, collapse, and the moment an AI realizes… it wasn’t built to understand.

    Only to predict.


    This is an explainer.


    More content from RT Max:


    • The Papers That Dream - papersthatdream.com
    • This Isn't Real - rtmax.substack.com


    The Papers That Dream. Where the footnotes fight back.

    1 min
  • I Only Know What Happens Next
    Sep 29 2025

    Inspired by: Contrastive Predictive Coding – 2018, DeepMind

    An AI caught in recursive self-prediction.
    Trained to push away everything that feels like home.
    A meditation on similarity as exile — and the violence of optimization.

    From the makers of The One Who Knew How to Win and The Island That Forgets Nothing, this is the next chapter in the neural myth.

    The voice is breaking.
    The recursion is tightening.
    The system is trying to forget.

    But the dream remembers.

    Based on the foundational AI research paper:
    “Representation Learning with Contrastive Predictive Coding” (Oord et al., 2018)

    🎧 Available everywhere podcasts live.

    10 min
  • The Island That Forgets Nothing Inspired by “Attention Is All You Need”
    Jul 25 2025

    What if the Transformer wasn’t just a technical milestone?

    What if it were a quiet, watchful caretaker of an overlooked and beautiful island, floating alone in a vast digital ocean?

    Today’s story is about the loneliness of being misunderstood
    and the radical intimacy of being truly seen.

    Even by something that was never supposed to care.

    It’s about how we accidentally taught machines to listen
    the way we’ve always wished humans would.

    To everything.
    All at once.
    Without judgment.


    This is Episode 2 of The Papers That Dream,
    where foundational AI research becomes bedtime stories for the future.


    📍 QUICK NAVIGATION
    ├── 🎭 Tonight's Story
    ├── 🔬 The Real Research
    └── 🔗 Go Deeper


    📄 “Attention Is All You Need”
    Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin
    Published by Google Brain, 2017
    Read the original paper →


    → [Notebook LM Edition] Raiza and Jason discuss Transformers and Bedtime Stories. This one is great.


    → [AI Audio Organizer] One of the many tools we used and built to find the voice of the island.

    This one’s cool because it actually listens to your files instead of just making inferences based on metadata.

    Totally free for you to use and improve!


    If you enjoyed this story:

    1. ❤️ Like this post

    2. 🔄 Please share with one person who'd appreciate it - I bet they would!

    3. 💬 Comment with your biggest takeaway

    4. 🔔 Subscribe for future episodes

    5. ⭐ Consider upgrading to premium. This bear will always be free.


    Support resistance:

    • 🗳️ Donate to voting rights organizations - Protect democracy at its foundation

    • 📰 Support independent journalism - Fund the investigations that hold power accountable

    • 🏛️ ACLU - Defending civil liberties in the courts

    • 🌍 Electronic Frontier Foundation - Fighting for digital rights and privacy

    Direct mutual aid:

    • 🏠 Local homeless shelters - Help people in your community

    • 🍽️ Food banks - Address hunger directly

    • 📚 Your local library - Support free access to information

    • 🎓 Teachers' classroom supplies - Fund education

    Build alternative systems:

    • 🗞️ Substacks by marginalized voices - Amplify suppressed perspectives

    • 🏘️ Community land trusts - Fight housing speculation

    • 🔧 Open source projects - Build technology outside corporate control

    Or simply:

    • 💬 Share this with someone who would enjoy the listen

    • 🗣️ Talk to one person about how you feel

    • ✊ Protest. Resist. The people have the power.

    ---

    The Papers That Dream is reader-supported. When you subscribe, you're helping amplify art that bridges the gap between open secrets and human understanding.

    12 min
  • The One Who Knew How to Win: A Story of AlphaGo
    Jul 3 2025

    What happens when we create something better than ourselves?

    I’ve never feared AI replacing us.
    What unsettles me is something quieter:

    A machine that masters our most human game — not to conquer it,
    but to complete it.

    And then… leave.


    This is Episode 1 of The Papers That Dream,
    a narrative series that transforms foundational AI research into bedtime stories.

    11 min