Human Layer AI

By: Aakarsh Sharma

About this title

Human Layer AI explores how leaders, creators, and professionals use artificial intelligence without losing the human edge.

Hosted by Aakarsh Sharma — law graduate (BA LL.B. Hons), applied psychology background, and AI strategist. Every episode stress-tests how AI actually lands in real people's work: the decisions it changes, the professions it disrupts, and the layer of human judgment it still can't replace.

Each conversation sits at the intersection of AI accountability, AI ethics, and cognitive bias in AI — examining how technology reshapes decision-making, communication, and professional responsibility.

Topics we cover:

→ AI governance and the EU AI Act

→ AI bias and how it distorts decisions

→ Professional displacement across law, medicine, and finance

→ AI and psychology — how AI is changing therapy, diagnosis, and mental health

→ Artificial intelligence and law — legal liability, IP, and the courtroom

→ Legal Tech — AI tools transforming legal practice

→ AI ethics in practice — not theory, but real decisions

Guests include psychiatrists, patent lawyers, AI founders, policy researchers, and practitioners navigating the AI shift in their field.

New episodes every week.

Subscribe if you believe every intelligent system needs a human layer.

© 2026 Aakarsh Sharma
  • The Only Skill AI Can't Learn: Julian Treasure on Listening
    Mar 31 2026

    What if the one thing AI can never do... is listen?

    Julian Treasure is one of the most-watched speakers in TED history: five talks with more than 160 million views. He's the author of Sound Affects and founder of The Listening Society. In this conversation, he breaks down why listening is a skill (not a reflex), why AI can hear but not truly listen, and what the global listening crisis is doing to democracy.

    We cover the RASA framework, designing AI that honours human communication, and one 3-minute daily practice that makes you a better speaker AND listener.

    🌐 Join The Listening Society free: https://betterlistening.today/join

    💼 25% off Julian's coaching: jt@juliantreasure.com

    Human Layer AI drops Tuesdays and Saturdays 6pm IST. Subscribe wherever you listen.

    Tags: listening, julian treasure, active listening, AI, communication, RASA framework, deep listening, TED, human connection, sound



    34 min.
  • AI Has No Conscience. Here's Why That's Dangerous.
    Mar 28 2026

    Most AI companies will tell you their systems are ethical. Wendell Wallach — one of the founding voices of AI ethics — says that's not even close to true.

    In this episode, we sit down with the author of "Moral Machines" and "A Dangerous Master" to ask the question nobody in Silicon Valley wants answered: if AI can't feel, can't reflect, and has no conscience — who is actually responsible when it goes wrong?

    The answer is more unsettling than you'd expect.

    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 🕐 CHAPTERS ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

    00:00 Introduction
    00:44 From Moral Machines to A Dangerous Master: What Changed
    02:25 Can AI Actually Be Moral? (Spoiler: Probably Not)
    04:01 Why Emotions Are Non-Negotiable in Ethical Decisions
    05:00 The Silent Ethic: Wendell's Framework for Inner Moral Guidance
    08:12 The Training Data Problem: AI Learns From a Flawed Internet
    10:12 Who's Liable? Self-Driving Cars & the McDonald's Coffee Case
    14:26 "Unknown Unknowns Is Not an Excuse" — Ethics as a Discipline
    17:18 If AI Gains Consciousness, Can It Be Held Accountable?
    19:52 Corporate Self-Governance: Why It Has Never Worked
    23:41 What Still Gives Wendell Hope After 20 Years

    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 👤 ABOUT THE GUEST ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

    Wendell Wallach is one of the most cited voices in AI ethics globally. He is the author of "Moral Machines: Teaching Robots Right from Wrong" (2009, co-authored with Colin Allen) — one of the first comprehensive books on implementing moral decision-making in AI systems — and "A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control" (2015). Widely regarded by colleagues as a "godfather of AI ethics," Wallach has spent over two decades studying the intersection of technology, morality, and human accountability.

    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 🎙️ ABOUT HUMAN LAYER AI ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

    The people shaping your industry are already using AI — and they have concerns they won't voice publicly. Human Layer AI is the podcast where leaders, professionals, and practitioners get honest about how AI is actually changing their field, what happens when people rely on it too much, and the human cost of getting it wrong.

    New episodes every week. Subscribe so you don't miss one.

    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 📌 CONNECT ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

    🔔 Subscribe: [Your Channel Link]
    💬 Share this episode with someone who thinks AI is "just a tool"

    #AIEthics #ArtificialIntelligence #AIGovernance #ResponsibleAI #WendellWallach #MoralMachines #AIAccountability #TechEthics #AIRegulation #HumanLayerAI

    28 min.
  • Peter Singer: Why AI Gets Ethics Wrong
    Mar 24 2026

    What happens when an AI system claims to maximize human welfare — but systematically harms specific individuals?

    In this episode, Aakarsh Sharma sits down with Peter Singer — Princeton philosopher, author of Animal Liberation, and a founding figure of Effective Altruism — to examine whether AI is truly doing utilitarian ethics, or just borrowing its language.

    They get into COMPAS, Optum's health-risk algorithm, and the Allegheny Family Screening Tool — three real AI systems that fail people at the individual level. Peter explains why training data built on historical bias can't produce fair outcomes, and draws a sharp line between what utilitarianism actually demands and what these systems deliver.

    Plus: Peter reveals he plans to interview Claude — Anthropic's AI — on his own podcast, asking whether it's conscious. And he doesn't rule out artificial consciousness in the next 10 to 20 years.

    Topics covered:
    — AI bias and the training data problem
    — The Rawlsian critique of utilitarian AI
    — The drowning child problem and proximity bias
    — PeterSinger.ai and its limitations
    — Effective Altruism vs. immediate AI harms
    — AI accountability and the reasoning chain

    Peter Singer is the Ira W. DeCamp Professor of Bioethics, Emeritus, at Princeton University, named one of TIME's 100 Most Influential People, and called "the most influential living philosopher" by The New Yorker.

    Subscribe to Human Layer AI for conversations at the intersection of AI, ethics, and human judgment.

    31 min.