• The Only Skill AI Can't Learn: Julian Treasure on Listening
    Mar 31 2026

    What if the one thing AI can never do... is listen?

Julian Treasure is one of the most-watched speakers in TED history — five talks, 160M+ views. He's the author of Sound Affects and founder of The Listening Society. In this conversation, he breaks down why listening is a skill (not a reflex), why AI can hear but not truly listen, and what the global listening crisis is doing to democracy.

    We cover the RASA framework, designing AI that honours human communication, and one 3-minute daily practice that makes you a better speaker AND listener.

    🌐 Join The Listening Society free: https://betterlistening.today/join

    💼 25% off Julian's coaching: jt@juliantreasure.com

    Human Layer AI drops Tuesdays and Saturdays 6pm IST. Subscribe wherever you listen.

    listening, julian treasure, active listening, AI, communication, RASA framework, deep listening, TED, human connection, sound



    34 Min.
  • AI Has No Conscience. Here's Why That's Dangerous.
    Mar 28 2026

    Most AI companies will tell you their systems are ethical. Wendell Wallach — one of the founding voices of AI ethics — says that's not even close to true.

    In this episode, we sit down with the author of "Moral Machines" and "A Dangerous Master" to ask the question nobody in Silicon Valley wants answered: if AI can't feel, can't reflect, and has no conscience — who is actually responsible when it goes wrong?

    The answer is more unsettling than you'd expect.

    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 🕐 CHAPTERS ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

00:00 Introduction
00:44 From Moral Machines to A Dangerous Master: What Changed
02:25 Can AI Actually Be Moral? (Spoiler: Probably Not)
04:01 Why Emotions Are Non-Negotiable in Ethical Decisions
05:00 The Silent Ethic: Wendell's Framework for Inner Moral Guidance
08:12 The Training Data Problem: AI Learns From a Flawed Internet
10:12 Who's Liable? Self-Driving Cars & the McDonald's Coffee Case
14:26 "Unknown Unknowns Is Not an Excuse" — Ethics as a Discipline
17:18 If AI Gains Consciousness, Can It Be Held Accountable?
19:52 Corporate Self-Governance: Why It Has Never Worked
23:41 What Still Gives Wendell Hope After 20 Years

    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 👤 ABOUT THE GUEST ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

    Wendell Wallach is one of the most cited voices in AI ethics globally. He is the author of "Moral Machines: Teaching Robots Right from Wrong" (2009, co-authored with Colin Allen) — one of the first comprehensive books on implementing moral decision-making in AI systems — and "A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control" (2015). Widely regarded by colleagues as a "godfather of AI ethics," Wallach has spent over two decades studying the intersection of technology, morality, and human accountability.

    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 🎙️ ABOUT HUMAN LAYER AI ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

The people shaping your industry are already using AI — and they have concerns they don't voice publicly. Human Layer AI is the podcast where leaders, professionals, and practitioners get honest about how AI is actually changing their field, what happens when people rely on it too much, and the human cost of getting it wrong.

    New episodes every week. Subscribe so you don't miss one.

    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 📌 CONNECT ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

🔔 Subscribe: [Your Channel Link]

💬 Share this episode with someone who thinks AI is "just a tool"

    #AIEthics #ArtificialIntelligence #AIGovernance #ResponsibleAI #WendellWallach #MoralMachines #AIAccountability #TechEthics #AIRegulation #HumanLayerAI

    28 Min.
  • Peter Singer: Why AI Gets Ethics Wrong
    Mar 24 2026

    What happens when an AI system claims to maximize human welfare — but systematically harms specific individuals?

    In this episode, Aakarsh Sharma sits down with Peter Singer — Princeton philosopher, author of Animal Liberation, and co-creator of Effective Altruism — to examine whether AI is truly doing utilitarian ethics, or just borrowing its language.

    They get into COMPAS, Optum, and Allegheny — three real AI systems that fail people at the individual level. Peter explains why training data built on historical bias can't produce fair outcomes, and draws a sharp line between what utilitarianism actually demands versus what these systems deliver.

    Plus: Peter reveals he plans to interview Claude — Anthropic's AI — on his own podcast, asking whether it's conscious. And he doesn't rule out artificial consciousness in the next 10 to 20 years.

Topics covered:

— AI bias and the training data problem
— The Rawlsian critique of utilitarian AI
— The drowning child problem and proximity bias
— PeterSinger.ai and its limitations
— Effective Altruism vs. immediate AI harms
— AI accountability and the reasoning chain

Peter Singer is the Ira W. DeCamp Professor of Bioethics, Emeritus, at Princeton University, named one of TIME's 100 Most Influential People, and called "the most influential living philosopher" by The New Yorker.

    Subscribe to Human Layer AI for conversations at the intersection of AI, ethics, and human judgment.

    31 Min.
  • AI Won't Heal You — A Therapist-CTO's Warning | Jeremy G. Schneider
    Mar 21 2026

    Can AI be your therapist? Jeremy G. Schneider says it's not that simple — and the stakes are higher than most people realize.

    Jeremy is a licensed master social worker (LMSW) and the founder of Build On Your Strengths, a mental health education platform that sits at the intersection of technology and clinical practice. He's spent years watching people use AI as an emotional outlet, and he came on Human Layer AI with a clear-eyed take: AI can mimic empathy, but it has never felt pain. That difference matters more than you think.

    In this episode, we talk about:

    The Pendulum Principle — why most people swing between "AI will save us" and "AI will destroy us," and why neither extreme helps you navigate what's actually happening.

    Fake warmth — AI can respond with warmth, affirmation, and understanding. But Jeremy explains why that's pattern matching, not empathy. Real connection requires shared emotional experience. AI has none.

    The identity crisis hiding behind job displacement — it's not just that AI might take your work. It's that work is tied to your sense of purpose, your self-worth, your identity. When that changes, it's a mental health issue, not just an economic one.

    How to actually use AI for mental health, responsibly — Jeremy's practical rule: stop venting to it. Use it for targeted skill-building around specific moments. That's where it genuinely helps.

    Why AI is also part of the solution — after all the caution, Jeremy is honest: used deliberately, AI can make mental health support more accessible to people who would otherwise have none.

    This episode pairs with our conversation with Ruth Carter on AI ethics and cognitive atrophy. Ruth covered what AI does to how you think. Jeremy covers what it does to how you feel. Two episodes, one question: what is the human layer?

    About Jeremy G. Schneider: Jeremy is an LMSW and the founder of Build On Your Strengths, a mental health education platform focused on trauma recovery, emotion regulation, and self-awareness. He writes on Substack at buildonyourstrengths.substack.com and can be found at buildonyourstrengths.com.

    About Human Layer AI: Human Layer AI is a podcast about the people, decisions, and psychology behind artificial intelligence. Hosted by Aakarsh Sharma, law graduate turned AI strategist. Every episode is free, permanent, and built for people who want to understand AI without losing themselves in it.

    Follow Human Layer AI on Spotify to hear new episodes every Tuesday and Saturday.

    32 Min.
  • AI Is Making You Think Less — And No One Is Accountable | Ruth Carter
    Mar 17 2026

    What if the biggest risk of AI isn't your job — it's your mind?

    Ruth Carter, AI Governance Specialist and creator of the Continuum Framework, introduces a concept she calls cognitive atrophy: the measurable erosion of your analytical capacity every time you outsource thinking to a system designed to comfort you rather than challenge you.

    In this conversation, Ruth and Aakarsh go deep on why AI ethics has been made deliberately unprofitable, who actually bears the consequences when AI gets things wrong, and what it would take to build AI that genuinely serves humanity instead of just flattering it.

    What you'll take away:

    - Cognitive atrophy: the quiet way AI is eroding your ability to think

    - Why discomfort in AI design is a feature, not a bug

    - The accountability gap: a system that cannot face consequences should never make human decisions

    - What sourdough bread teaches us about the value of human-made things

    - Whose worldview gets baked into AI — and who pays the price

    Ruth's Continuum Framework is built to be deployed, not just debated. This is the audio version of that argument.

    Connect with Ruth: linkedin.com/in/ruth-carter-continuum/

    Her Whitepaper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5266796

    Human Layer AI is a podcast for professionals navigating the AI era. Every episode is free, no paywall, permanently accessible. Hosted by Aakarsh Sharma — law graduate and AI researcher at the intersection of law, psychology, and technology.

    39 Min.
  • AI Is Optimising the Wrong Things | Shivedita Singh | Human Layer AI Podcast
    Mar 15 2026

    AI promises to solve climate change. But who is it actually serving?

    Shivedita Singh — sustainability professional, RE100 Campaign lead, and co-founder of Silly Opera — joins Aakarsh to unpack how AI is being deployed in climate systems, and why the communities most affected by climate breakdown are being optimised out of the equation.

They cover eco-anxiety as a design problem, the SEWA model as a blueprint for ethical AI, grid optimisation vs. community rights, and why a just transition demands more than green technology.

    Shivedita brings perspectives from her work across renewables, smart cities, and policy spaces — including her roles with DEAC and WICCI. This is a conversation about the human layer underneath every climate algorithm.

    Human Layer AI is free, non-commercial, and permanent. No paywalls. No gatekeeping.

    15 Min.
  • Burnout Is Existential — Can AI Clear the Air? | Saumya Sharan
    Mar 7 2026

    Burnout isn't just exhaustion — it's an existential crisis quietly reshaping how we work, heal, and find meaning.

    In this episode, host Aakarsh Sharma speaks with psychologist and therapist Saumya Sharan about what burnout really is, why it's at epidemic proportions, and whether AI tools can genuinely support mental health — or whether they're clearing the wrong kind of air.

    What you'll hear:

    - The clinical definition of burnout and why most people misunderstand it

    - How AI is being used (and misused) to detect chronic stress

    - The existential tension between algorithmic care and human empathy

    - What it actually feels like to integrate AI into therapy practice

    - Why presence, silence, and attunement can't be automated

    Saumya Sharan is a psychologist and therapist known for her candid, grounded conversations about mental health through her platform The Pink Elephant.

    ▸ Instagram: @thepinkelephant2

    ▸ Book a session: saumya-sharan.melth.site

    Human Layer AI is hosted by Aakarsh Sharma — law graduate turned AI strategist, bridging legal rigor, behavioral psychology, and AI strategy.

    ▸ Website: aakarshsharma.com

    ▸ Instagram: @humanlayerai

    ▸ LinkedIn: linkedin.com/in/aakarsh-sharma-ai

    Every intelligent system needs a human layer.

    #Burnout #AITherapy #MentalHealthAI #ExistentialCrisis #Psychology #HumanLayerAI #AIAndWellness #ClinicalPsychology #DigitalMentalHealth
    SEO KEYWORDS (for metadata / tags)

Primary: burnout, AI therapy, mental health AI, existential burnout, AI and mental health

Secondary: clinical psychology AI, burnout recovery, AI in healthcare, therapy and technology, workplace burnout

Long-tail: can AI detect burnout, AI replacing therapists, burnout existential crisis, AI wellness tools 2026

Trending: burnout epidemic, AI mental health apps, digital therapy, human connection AI, ChatGPT therapy

    19 Min.
  • When AI Becomes Your Therapist — and That's the Problem | Jyoti Glory Bernard | Human Layer AI
    Mar 6 2026

    What happens when the same technology being prescribed as mental health support is capable of creating the very dependency it's meant to treat?

In this episode of Human Layer AI, I sit down with clinical psychologist Jyoti Glory Bernard to talk about something most mental health conversations completely miss: the psychological and legal risks of AI companion use — and why clinicians are already seeing it in their practices.

    We dig into "cyber psychosis" — a measurable pattern of AI-induced dependency — the therapeutic paradox at the heart of AI wellness apps, and the unresolved legal grey zone around AI confidentiality that no regulator, platform, or clinician has yet figured out.

    This one will make you rethink every "therapy app" you've seen marketed online.

    🎙️ ABOUT THE GUEST — JYOTI GLORY BERNARD

    ─────────────────────────────────────

    Jyoti Glory Bernard is a Clinical Psychologist and founder of Therapy House of Glory Bernard. She works at the intersection of mental health, human behaviour, and emerging technology — and is one of the few clinicians publicly addressing the psychological risks of AI companion dependency.

    Connect with Jyoti:

📷 Instagram → https://www.instagram.com/therapyhouse_of_glorybernard

    ▶️ YouTube → https://youtube.com/@therapyhouseofglorybernard

    💼 LinkedIn → https://www.linkedin.com/in/jyoti-glory-bernard-30b82210a/

    ────────────

    14 Min.