• There Is No Alternative: How “Inevitable AI” Keeps the Bubble Inflating
    Feb 4 2026

    This week, Kimberly Becker and Jessica Parker dig into the “AI bubble”—why it keeps inflating even as skepticism grows inside the industry.

    We unpack the growing disconnect between massive investment and unclear payoffs, including a widely discussed Goldman Sachs research question: what $1 trillion problem will AI actually solve? From there, we connect the dots between two very different narratives:

    • Dario Amodei’s essay framing “powerful AI” as an imminent civilization-level risk—and a reason to race ahead (carefully… “to some extent”).
    • Cory Doctorow’s argument that this is a familiar tech bubble pattern, with a predictable ending—and that we should focus on what can be salvaged from the wreckage.

    Along the way, we define what makes a bubble a bubble (and how this one differs from dot-com), talk about growth-stock dynamics and why no one in power wants to be responsible for “popping” it, and explore what AI hype looks like when it hits real workplaces—especially through Doctorow’s concept of the reverse centaur: a human reduced to a machine’s accountable appendage.

    We also go nerdy (in the best way): training corpora, “WEIRD” cultural assumptions baked into data, model-collapse fears from AI eating AI-generated output, and why the internet itself feels increasingly polluted by synthetic text patterns.

    In this episode:

    • The “$1T problem” question and why the AI ROI story feels thin right now
    • Why “AI is inevitable” functions like a strategy (not a neutral prediction)
    • Growth stocks vs. mature companies—and the incentive to keep inventing the next hype cycle
    • Reverse centaurs, liability, and why “AI replaces jobs” often means “humans take the blame”
    • “TINA” (There Is No Alternative) as a trap—and a demand dressed up as an observation
    • Corpus 101: what it is, why it matters, and how bias shows up in “universal” models
    • Model collapse / photocopy-of-a-photocopy: when AI trains on AI outputs
    • Regulation talk that centers on “economic value” (and whose value that really is)
    • Pit & Peach: slowing down, pausing, gratitude, and building without growth pressure

    Sources:

    • Goldman/AI bubble discussion (Deep View): https://archive.thedeepview.com/p/goldman-sachs-publishes-blistering-report-on-ai-bubble
    • Goldman Sachs “$1T spend” framing: https://www.goldmansachs.com/insights/top-of-mind/gen-ai-too-much-spend-too-little-benefit
    • Amodei essay: https://www.darioamodei.com/essay/the-adolescence-of-technology
    • Doctorow (The Guardian): https://www.theguardian.com/us-news/ng-interactive/2026/jan/18/tech-ai-bubble-burst-reverse-centaur

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/

    1 hr 1 min
  • Non-Technical Founders Building AI Products: Lessons from Moxie + Tobey’s Tutor (Startup Debrief)
    Jan 28 2026

    In this episode, Kimberly and Jessica debrief Jessica’s interview with Arlyn (founder of Tobey’s Tutor) and unpack what it looks like to build AI products as “non-technical” founders. They reflect on their own journey building Moxie: bootstrapping vs raising money, the pressure-cooker effect of investors, the messy realities of UX/UI and platform migration, the world of APIs and subscriptions, and why “friction” can be an ethical design choice, especially in AI for education.

    In this episode, we talk about:

    • Why “non-technical founder” is a misleading label
    • The hope in AI (and how “both can be true”: benefits + harms at once)
    • Bootstrapped “mom-and-pop” AI companies vs venture-backed growth expectations
    • The founder reality: burnout, delegation, and why money changes decision-making
    • The startup metrics whirlwind: LTV, CAC, churn, stickiness, payback period
    • What building an AI product costs in practice: tools, subscriptions, and constant ops
    • UX/UI psychology: heatmaps, “rage clicking,” onboarding friction, and conversion decisions
    • Why “friction” can be good (consent, safety, pacing, limits, especially for kids)
    • “Building on rented land”: what happens when OpenAI/Google/Anthropic change terms
    • The bigger ethical question: solving a problem vs optimizing a broken system

    Suggested listener action

    If you’re building, using, or researching AI in education: reach out. And if you’re using AI tutoring with kids (or yourself), ask questions about data, limits, mistakes, and oversight.

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/

    1 hr 2 min
  • Vibe Coding and Building AI for Kids: Inside Tobey's Tutor with Arlyn Gajilan
    Jan 21 2026

    In this episode of Women Talkin’ Bout AI, Jessica sits down with Arlyn Gajilan, founder of Tobey’s Tutor, an AI-powered learning support platform she originally built for her son, who has ADHD and dyslexia.

    This conversation is a deep dive into what it actually looks like to build an AI product as a non-technical, bootstrapped founder, from vibe coding and early prototypes to onboarding, safety systems, and pricing decisions.

    Jessica fully geeks out with Arlyn as they unpack:

    • Building AI to solve a deeply personal problem
    • What “vibe coding” can (and can’t) do
    • Designing responsibly for children and learning differences
    • UX vs. UI decisions that matter
    • Bootstrapping, pricing, and intentionally staying small
    • Why “AI wrapper” criticism misses the point
    • The reality of building while parenting and working full-time

    Mentioned in the Episode

    • Tobey’s Tutor: https://tobeystutor.com/
    • Scientific American (article mentioning Tobey’s Tutor): https://www.scientificamerican.com/article/how-one-mom-used-vibe-coding-to-build-an-ai-tutor-for-her-dyslexic-son/
    • Mobbin (UX/UI inspiration library): https://mobbin.com/
    • Empire of AI by Karen Hao: https://www.penguinrandomhouse.com/books/743569/empire-of-ai-by-karen-hao/

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/

    57 min
  • When Everyone Uses AI, What’s Real Anymore?
    Jan 14 2026

    As AI shows up everywhere, something shifts, and it becomes harder to tell what’s human and what’s generated.

    In this episode, Jessica and Kimberly unpack how AI-driven convenience is reshaping education, relationships, identity, and even big systems (like markets and healthcare). They explore signaling, semiotics, and why “perfect” content can feel thin or unreal, and end with small ways to choose more human signals in a noisy world.

    Bonus: If you want to see how this episode ended, tune in on YouTube for a few unfiltered bloopers at the end: https://www.youtube.com/@womentalkinboutai

    Topics we cover in this episode:

    • AI as an invisible intermediary
    • Finding the signal in the noise
    • Higher ed reality check
    • Why AI feels “safer” than people
    • Semiotics
    • The “uncanny valley” of social media
    • AI for therapy + parenting support
    • Cultural swing back

    Not-a-Sponsor Bloopers (YouTube only): Stick around on YouTube for our end-of-episode bloopers, featuring our favorite products that are definitely not sponsoring this show (yet). https://www.youtube.com/@womentalkinboutai



    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/

    1 hr 10 min
  • Rest, Resistance, and the Protestant Work Ethic (in the Age of AI)
    Jan 7 2026

    We’re kicking off 2026 with our most personal episode yet.

    This conversation wasn’t planned. We sat down intending to talk about what comes next for the show, and instead found ourselves in a deeper discussion about work, burnout, ambition, and what it means to live in a moment where AI is rapidly reshaping labor, identity, and trust.

    In this episode:

    • Why “work is sacred” feels harder to believe and harder to let go of
    • Burnout, hustle culture, and the cognitive dissonance of automation
    • Labor zero, post-labor economics, and the fear beneath productivity
    • Status, money, degrees, and inherited stories about worth
    • Rest as resistance and nervous system regulation
    • AI, trust erosion, and the danger of slow confusion
    • Dopamine, addiction, and withdrawal at a societal scale
    • Why connection may be the real antidote

    Sources:

    • David Shapiro's Substack on Labor Zero: https://daveshap.substack.com/p/im-starting-a-movement
    • He, She, and It by Marge Piercy: https://en.wikipedia.org/wiki/He,_She_and_It
    • Ethan Mollick's Substack on the temptation of The Button: https://www.oneusefulthing.org/p/setting-time-on-fire-and-the-temptation
    • Rest Is Resistance by Tricia Hersey: https://blackgarnetbooks.com/item/oR7uwsLR1Xu2xerrvdfsqA
    • The Last Invention (AI Podcast): https://podcasts.apple.com/us/podcast/the-last-invention/id1839942885

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/

    1 hr 3 min
  • Best of 2025: AI, Work, Resistance, and What We Learned
    Dec 31 2025

    Best of 2025 brings together some of the most impactful conversations from this year on Women Talkin’ Bout AI.

    In this episode, we revisit our top 5 episodes of the year:

    • Beyond Work: Post-Labor Economics with David Shapiro: A conversation about automation, empathy, and what remains uniquely human as AI reshapes work.
    • Refusing the Drumbeat with Melanie Dusseau and Miriam Reynoldson: A discussion on resistance in higher education and their open letter refusing the push to adopt generative AI in the classroom.
    • Once You See It, You Can’t Unsee It: The Enshittification of Tech Platforms: Jessica and Kimberly unpack enshittification and why so many tech platforms feel like they get worse over time.
    • Maternal AI and the Myth of Women Saving Tech with Michelle Morkert: A critical examination of “maternal AI” and what gendered narratives reveal about power and responsibility in tech.
    • Competing with Free: Why We Closed Moxie: A candid reflection on what it was like to build, and ultimately shut down, an AI startup in this moment.

    We’re heading into 2026 with some incredible guests and conversations we can’t wait to share.

    Thank you for listening, for thinking with us, and for staying curious alongside us.

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/

    39 min
  • The Trojan Horse of AI
    Dec 24 2025

    In this final guest episode of the year, we explore AI as a kind of Trojan horse: a technology that promises one thing while carrying hidden costs inside it. Those costs show up in data centers, energy and water systems, local economies, and the communities asked to host the infrastructure that makes AI possible.

    We’re joined by Jon Ippolito and Joline Blais from the University of Maine for a conversation that starts with AI’s environmental footprint and expands into questions of extraction, power, education, and ethics.

    In this episode, we discuss:

    • Why AI can function as a Trojan horse for data extraction and profit
    • What data centers actually do, and why they matter
    • The environmental costs hidden inside “innovation” narratives
    • The difference between individual AI use and industrial-scale impact
    • Why most data center activity isn’t actually AI
    • How communities are pitched data centers—and what’s often left out
    • The role of gender in ethical decision-making in tech
    • What AI is forcing educators to rethink about learning and work
    • Why asking “Who benefits?” still cuts through the hype
    • And how dissonance can be a form of clarity

    Resources mentioned:

    • IMPACT Risk framework: https://ai-impact-risk.com
    • What Uses More: https://what-uses-more.com

    Guests:

    • Jon Ippolito – artist, writer, and curator who teaches New Media and Digital Curation at the University of Maine.
    • Joline Blais – researches regenerative design, teaches digital storytelling and permaculture, and advises the Terrell House Permaculture Center at the University of Maine.

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/

    1 hr 21 min
  • Easy for Humans, Hard for Machines: The Paradox Nobody Talks About
    Dec 17 2025

    Why can AI crush law exams and chess grandmasters, yet still struggle with word games? In this episode, Kimberly and Jessica use Moravec's Paradox to unpack why machines and humans are "smart" in such different ways—and what that means for how we use AI at work and in daily life.

    They start with a practical fact-check on agentic AI: what actually happens to your data when you let tools like ChatGPT or Gemini access your email, calendar, or billing systems, and which privacy toggles are worth changing. From there, they dive into why AI fails at the New York Times' Connections game, how sci-fi anticipated current concerns about AI psychology decades ago, and what brain-computer interfaces like Neuralink tell us about embodiment and intelligence.

    Along the way: sycophantic bias, personality tests for language models, why edtech needs more friction, and a lighter "pit and peach" segment with unexpected life hacks.

    Resources by Topic

    Privacy & Security (ChatGPT)

    OpenAI Memory & Controls (Official Guide)

    OpenAI Data Controls & Privacy FAQ

    OpenAI Blog: Using ChatGPT with Agents

    Moravec's Paradox & Cognitive Science

    Moravec's Paradox (Wikipedia)

    "The Moravec Paradox" - Research Paper

    Sycophancy & LLM Behavior

    "Sycophancy in Large Language Models: Causes and Mitigations" (arXiv)

    "Personality Testing of Large Language Models: Limited Temporal Stability, but Highlighted Prosociality"

    Brain-Computer Interfaces & Embodied AI

    Neuralink: "A Year of Telepathy" Update

    Leave us a comment or a suggestion!

    Support the show

    Contact us: https://www.womentalkinboutai.com/

    46 min