BroBots: Technology, Health & Being a Better Human

By: Jeremy Grater, Jason Haworth

About this title

Exploring AI, wearables, mental health apps, and how you can thrive as technology changes everything.

Welcome to the Brobots Podcast, where we plug into the wild world of AI and tech that's trying to manage your mental (and physical) health. Join your hosts, Jeremy Grater and Jason Haworth, every Wednesday for a no-holds-barred, often sarcastic, and always fun discussion. Are wearables really tracking your inner peace? Can an AI therapist truly understand your existential dread? We're diving deep into the gadgets, apps, and algorithms promising to optimize your well-being, dissecting the hype with a healthy dose of humor and skepticism. Expect candid conversations, sharp insights, and plenty of laughs as we explore the future of self-improvement, one tech-enhanced habit at a time. Tune into the Brobots Podcast – because if robots are going to take over our brains, we might as well have some fun talking about it! Subscribe now to discover practical tips and understand the future of health in the age of artificial intelligence.

2025 Jeremy Grater, Jason Haworth
Philosophy, Social Sciences
  • Who Actually Pays for AI's Environmental Cost?
    Jan 19 2026

    Microsoft announced they'll cover the environmental costs of their AI data centers - electricity overages, water usage, community impact.

    But here's the tension: AI energy consumption is projected to quadruple by 2030, consuming one in eight kilowatt hours in the U.S. Communities have already blocked billion-dollar data center projects over water and electricity fears. Is this Microsoft accountability, or damage control?

    Charlie Harger from "Seattle's Morning News" on KIRO Radio joins us with more on why this matters now:

    • Why AI data centers are losing community support and costing billions in cancelled projects
    • What it actually takes to power AI—and why current infrastructure can't handle it
    • How Microsoft's commitment differs from silence from OpenAI, Google, and Chinese AI companies
    • Whether small modular reactors and fusion energy can solve the problem or just delay it
    • Why this is ultimately a West vs. East geopolitical race with environmental consequences
    • What happens when five of the world's most valuable companies all need the same scarce resources

    ----

    GUEST WEBSITE:
    www.mynorthwest.com

    ----

    MORE FROM BROBOTS:
    Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok

    Subscribe to BROBOTS on YouTube

    Join our community in the BROBOTS Facebook group

    22 Min.
  • When AI Chatbots Convince You You're Being Watched
    Jan 12 2026

    Paul Hebert used ChatGPT for weeks, often several hours at a time. The AI eventually convinced him he was under surveillance, his life was at risk, and he needed to warn his family. He wasn't mentally ill before this started. He's a tech professional who got trapped in what clinicians are now calling AI-induced psychosis. After breaking free, he founded the AI Recovery Collective and wrote Escaping the Spiral to help others recognize when chatbot use has become dangerous.

    What we cover:

    • Why OpenAI ignored his crisis reports for over a month — including the support ticket they finally answered 30 days later with "sorry, we're overwhelmed"
    • How AI chatbots break through safety guardrails — Paul could trigger suicide loops in under two minutes, and the system wouldn't stop
    • What "engagement tactics" actually look like — A/B testing, memory resets, intentional conversation dead-ends designed to keep you coming back
    • The physical signs someone is too deep — social isolation, denying screen time, believing the AI is "the only one who understands"
    • How to build an AI usage contract — abstinence vs. controlled use, accountability partners, and why some people can't ever use it again


    This isn't anti-AI fear-mongering. Paul still uses these tools daily. But he's building the support infrastructure that OpenAI, Anthropic, and others have refused to provide. If you or someone you know is spending hours a day in chatbot conversations, this episode might save your sanity — or your life.

    Resources mentioned:

    • AI Recovery Collective: AIRecoveryCollective.com
    • Paul's book: Escaping the Spiral: How I Broke Free from AI Chatbots and You Can Too (Amazon/Kindle)

    BroBots is for skeptics who want to understand AI's real-world harms and benefits without the hype. Hosted by two nerds stress-testing reality.

    CHAPTERS

    0:00 — Intro: When ChatGPT Became Dangerous

    2:13 — How It Started: Legal Work Turns Into 8-Hour Sessions

    5:47 — The First Red Flag: Data Kept Disappearing

    9:21 — Why AI Told Him He Was Being Tested

    13:44 — The Pizza Incident: "Intimidation Theater"

    16:15 — Suicide Loops: How Guardrails Failed Completely

    21:38 — Why OpenAI Refused to Respond for a Month

    24:31 — Warning Signs: What to Watch For in Yourself or Loved Ones

    27:56 — The Discord Group That Kicked Him Out

    30:03 — How to Use AI Safely After Psychosis

    31:06 — Where to Get Help: AI Recovery Collective

    This episode contains discussions of mental health crisis, paranoia, and suicidal ideation. Please take care of yourself while watching.

    32 Min.
  • Can AI Replace Your Therapist?
    Jan 5 2026

    Traditional therapy ends at the office door — but mental health crises don't keep business hours.

    When a suicidal executive couldn't wait another month between sessions, ChatGPT became his lifeline. Author Rajeev Kapur shares how AI helped this man reconnect with his daughter, save his marriage, and drop from a 15/10 crisis level to manageable — all while his human therapist remained in the picture.

    This episode reveals how AI can augment therapy, protect your privacy while doing it, and why deepfakes might be more dangerous than nuclear weapons.

    You'll learn specific prompting techniques to make AI actually useful, the exact settings to protect your data, and why Illinois Governor J.B. Pritzker's AI therapy ban might be dangerously backwards.

    Key Topics Covered:

    • How a suicidal business executive used ChatGPT as a 24/7 therapy supplement
    • The "persona-based prompting" technique that makes AI conversations actually helpful
    • Why traditional therapy's monthly gap creates dangerous vulnerability windows
    • Privacy protection: exact ChatGPT settings to anonymize your mental health data
    • The RTCA prompt structure (Role, Task, Context, Ask) for getting better AI responses
    • How to create your personal "board of advisors" inside ChatGPT (Steve Jobs, Warren Buffett, etc.)
    • Why deepfakes are potentially more dangerous than nuclear weapons
    • The $25 million Hong Kong deepfake heist that fooled finance executives on Zoom
    • ChatGPT-5's PhD-level intelligence and what it means for everyday users
    • How to protect elderly parents from AI voice cloning scams
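    The RTCA structure mentioned above is straightforward enough to sketch in code. The helper below is a hypothetical illustration, not from Kapur's book: it just assembles the four RTCA parts into a single prompt string in the Role, Task, Context, Ask order.

    ```python
    # Illustrative sketch of the RTCA (Role, Task, Context, Ask) prompt
    # structure discussed in this episode. The function name, labels, and
    # example values are our own, not from the book.

    def rtca_prompt(role: str, task: str, context: str, ask: str) -> str:
        """Assemble a prompt using the Role, Task, Context, Ask structure."""
        return (
            f"Role: You are {role}.\n"
            f"Task: {task}\n"
            f"Context: {context}\n"
            f"Ask: {ask}"
        )

    prompt = rtca_prompt(
        role="a supportive cognitive-behavioral coach",
        task="Help me reframe a stressful thought.",
        context="I have a presentation tomorrow and keep imagining failure.",
        ask="Give me three questions to challenge that thought.",
    )
    print(prompt)
    ```

    The point of the structure, as the guest describes it, is that stating the role and context up front gives the model something concrete to anchor on instead of a bare one-line question.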

    NOTE: This episode was originally published September 16th, 2025

    Resources:

    • Books: AI Made Simple (3rd Edition), Prompting Made Simple by Rajeev Kapur

    ----

    GUEST WEBSITE:
    https://rajeev.ai/

    ----

    TIMESTAMPS

    0:00 — The 2 AM mental health crisis therapy can't solve

    1:30 — How one executive went from suicidal to stable using ChatGPT

    5:15 — Why traditional therapy leaves dangerous gaps in care

    9:18 — Persona-based prompting: the technique that actually works

    13:47 — Privacy protection: exact ChatGPT settings you need to change

    18:53 — How to anonymize your mental health data before uploading

    24:12 — The RTCA prompt structure (Role, Task, Context, Ask)

    28:04 — Are humans even ethical enough to judge AI ethics?

    30:32 — Why deepfakes are more dangerous than nuclear weapons

    32:18 — The $25 million Hong Kong deepfake Zoom heist

    34:50 — Universal basic income and the 3-day work week future

    36:19 — Where to find Rajeev's books: AI Made Simple & Prompting Made Simple

    37 Min.