• Charlie Hurst, Tom Noble and Will Sudlow on Flat White or F*ck Off
    Feb 22 2026
    What happens when someone runs with a business idea they've heard as a thought experiment on a podcast? Can a business have an expletive in its name? And is it possible to run a business that sells a single, very specific product?

    Episode Summary
    On this episode, I’m joined by Charlie Hurst, Tom Noble and Will Sudlow — the founders of Flat White or F*ck Off*, a coffee brand inspired by a thought experiment from friend of the show, Rory Sutherland. The concept is simple: sell one thing — flat whites — and if you want something else… the answer’s in the name.

    ⚠️ *Given the name of the business, this episode contains a lot of swearing!

    Within four months of hearing the idea on Jamie Laing’s Great Company podcast, they’d banded together — having never met, but inspired to give the business a go — built a brand, grown an audience of tens of thousands, and served 1,500 flat whites in a single day at a London pop-up. Most people would've treated Rory's idea as an interesting thought experiment. But Charlie, Tom and Will decided — with Rory's blessing — to actually build it.

    In an extended conversation, we explore what it means to:
    Build a brand before you have a product
    Grow an audience before you open a shop
    Share your financials publicly
    Deliberately polarise rather than please

    Discover why Charlie, Tom and Will spent £22,000 on a one-day, loss-making pop-up that served as a live experiment: part marketing, part proof of concept, part behavioural case study.

    We discuss why constraint can be liberating, why queues affect perceived quality, how social proof shapes demand, and why narrowing your audience can be more powerful than trying to attract everyone.

    This isn’t just a story about coffee. It’s about conviction, creative constraint and what happens when you deliberately ignore conventional business wisdom.

    Guest Bios
    Charlie Hurst
    Designer and brand builder.
    Charlie created the original visual identity for Flat White or F*ck Off after seeing Rory’s idea online.

    Tom Noble
    Entrepreneur and digital builder. Tom documented the entire journey in public, helping grow the brand’s audience before a single coffee was sold.

    Will Sudlow
    Co-founder of experiential agency The Impossible. Will brought production expertise to turn the idea into a large-scale pop-up event.

    AI-Generated Timestamped Summary
    00:00 – From Thought Experiment to Real Business: why this is more than a coffee story.
    03:00 – Hearing Rory’s Idea: how Charlie, Tom and Will discovered the concept and decided to act on it.
    08:00 – Building in Public: growing an audience before having a physical product; documenting everything online.
    15:00 – One Product Only: why selling just flat whites is a strategic constraint — and a behavioural signal.
    25:00 – The Pop-Up Experiment: serving 1,500 coffees in a day; spending £27,000 as a marketing investment.
    35:00 – Polarisation & Backlash: criticism, online sceptics and why not being for everyone is the point.
    50:00 – Perception, Queues & Behaviour: what they learned about speed, quality signals and social proof.
    01:05:00 – Risk, Conviction & Entrepreneurship: why building something in public is both terrifying and liberating.
    01:20:00 – What Happens Next: scaling, experimentation and staying true to the core idea.
    Links
    Rory on Jamie Laing’s Great Company podcast - https://shows.acast.com/great-company/episodes/rory-sutherland
    Flat White or F*ck Off - https://flatwhiteorfckoff.com/
    Instagram - https://www.instagram.com/flatwhiteorfckoff/
    TikTok - https://www.tiktok.com/@flatwhiteorfckoff/
    LinkedIn - https://www.linkedin.com/company/flat-white-or-fck-off/

    The co-founders
    Tom on LinkedIn - https://www.linkedin.com/in/thomasnoble1992/
    Charlie on LinkedIn - https://www.linkedin.com/in/charlie-hurst-715364150/
    Will on LinkedIn - https://www.linkedin.com/in/willsudlow/
    Ask The Impossible - https://asktheimpossible.com/

    Rory's appearances on this show:
    https://www.humanriskpodcast.com/rory-sutherland-on-compliance/
    https://www.humanriskpodcast.com/rory-sutherland-paul-craven-on-alchemy-magic/
    https://www.humanriskpodcast.com/gerald-ashley-rory-sutherland/
    https://www.humanriskpodcast.com/rory-sutherland-gerald-ashley-paul-craven-at-abbey-road-part-one/
    1 hr and 16 min.
  • Amy Watson on Violence Against Women & Girls
    Feb 15 2026
    What if we stopped telling women how to stay safe, and started asking why violence against them keeps happening in the first place? On this episode, I’m joined, for a second time, by Amy Watson, the founder of social enterprise HASSL. She’s trying to tackle violence against women and girls at its root. Not with another awareness campaign or safety app. But by building a global movement designed to shift responsibility away from women, and onto society.

    Overview
    When Amy first joined the podcast a year ago, we discussed the scale and reality of violence against women. A year on, she returns to talk about what it actually takes to tackle it.
    In just twelve months, her social enterprise HASSL has grown into a global prevention movement: more than half a million followers, thousands of volunteers across over 120 countries, and campaigns reaching millions of people organically.

    But this isn’t just a story about social media growth. It’s about culture change. In an extended and wide-ranging discussion, we explore why laws alone don’t solve systemic problems, why “stay safe” advice can unintentionally reinforce the wrong narrative, and what happens when you apply entrepreneurial thinking to one of society’s most entrenched issues.

    This is a conversation about scale, backlash, risk and moral ambition, and about what it means to build something that refuses to compromise.

    Guest Bio - Amy Watson
    Amy is the founder of HASSL, a global social enterprise tackling harassment at the root.

    HASSL focuses on prevention — shifting responsibility for violence away from women as individuals and onto the cultural and systemic factors that enable harm. Combining research, education and partnerships, it aims to create scalable, long-term change rather than short-term fixes.

    In just over a year, HASSL has grown into a global movement with hundreds of thousands of followers and volunteers across more than 120 countries.

    Amy’s work sits at the intersection of social justice and entrepreneurship, applying business thinking to one of society’s most entrenched problems.

    AI-Generated Timestamped Summary
    00:00 – Intro: From Problem to Action

    Christian frames this follow-up as a shift from discussing violence against women to exploring what it takes to tackle it in practice.

    02:00 – What HASSL Stands For
    Amy explains HASSL’s prevention-first approach: shifting responsibility away from women and onto culture, systems and male behaviour.

    05:00 – Scaling a Social Enterprise
    Rapid global growth, research-driven strategy, sustainable funding streams and a structured five-stage plan.

    08:30 – Education & Engaging Men
    Launch of free education resources, bystander tools and conversation frameworks designed to invite men into the solution.

    16:00 – Entrepreneurship, Risk & Moral Ambition
    Applying startup thinking to social change; sacrificing financial ambition for impact; long-term vision over quick wins.

    35:00 – Values, Independence & Leadership
    Why Amy avoids outside investment, refuses to compromise on inclusivity, and builds operational resilience into the organisation.

    58:30 – Backlash & Online Abuse
    Trolling, hate messages and the deliberate disruption of a webinar — and what that reveals about cultural normalisation.

    01:05:00 – Using Criticism as Leverage
    Turning recurring myths (“false accusations”, “what about men?”) into educational opportunities and narrative shifts.

    01:21:00 – Barriers to Reporting
    Why speaking out rarely benefits women; the structural and social costs involved.

    01:37:00 – Building a Movement
    How listeners can engage — and why lasting change requires persistence, scale and collective responsibility.

    Links
    Amy’s previous appearance on the show - https://www.humanriskpodcast.com/amy-watson-on-violence-against-women/

    HASSL - hassl.uk

    Moral Ambition by Rutger Bregman - https://www.moralambition.org/book
    1 hr and 40 min.
  • Professor Veronica Root Martinez on Purpose-Driven Compliance
    Feb 7 2026
    Who determines what 'good' Compliance actually looks like? The obvious answer is regulators (and, in some jurisdictions, prosecutors). But what if it were the regulated firms themselves? That's the idea behind purpose-driven compliance, which I'm exploring on this episode.

    Episode Summary
    To explore this, I'm joined by Veronica Root Martinez, Professor of Law at Duke University School of Law, to examine a deceptively simple but unsettling idea: 100% compliance is impossible. While we often behave as though perfect compliance is the goal — and in some safety-critical domains it must be — most organisational compliance involves humans. And humans make mistakes. Things get missed. Context changes. Stuff goes wrong.

    So if perfection isn’t realistic, the real question becomes: how do organisations decide what really matters? The traditional answer has been to look outward — to regulators, enforcement authorities, and in some jurisdictions (particularly the US), prosecutors. Their priorities, expressed through sentencing guidelines, enforcement actions, and settlements, end up defining what “good” compliance looks like.

    Veronica challenges that logic. She argues that this gets things the wrong way round. Instead of letting enforcement priorities dictate behaviour, she makes the case for purpose-driven compliance — where organisations set their own priorities based on their purpose, values, and actual risks, rather than chasing shifting regulatory expectations.

    Along the way, the conversation explores culture, human judgment, psychological safety, technology, experimentation, and why “best practice” can sometimes make things worse rather than better. This episode is for anyone who writes rules, enforces them — or simply has to live under them.

    Guest Biography
    Veronica Root Martinez is a Professor of Law at Duke University School of Law, where she researches corporate compliance, ethics, and organisational culture.
    Her work on purpose-driven compliance challenges enforcement-led models and explores how organisations can set priorities based on their own purpose, values, and risks. Before entering academia, Veronica practised as an associate at a large law firm in Washington, DC, where she worked on regulatory and white-collar matters — experience that strongly informs the practical orientation of her research.

    Links
    Professor Veronica Root Martinez – Faculty Profile - https://law.duke.edu/fac/martinez
    Veronica on LinkedIn - https://www.linkedin.com/in/veronica-root-martinez/
    Purpose-Driven Compliance (paper discussed in the episode) - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6078766

    AI-Generated Timestamped Summary
    00:00 – 02:00 | “Because they said so”
    Christian reframes compliance as a universal human experience — not just a professional discipline — and introduces the problem of rules justified solely by regulatory expectation.
    02:00 – 05:30 | Why 100% compliance is impossible
    Veronica explains why modern organisations cannot realistically achieve perfect compliance when humans are involved — and why pretending otherwise creates problems.
    05:30 – 10:30 | Tolerated misconduct and cultural drift
    How allowing “small” rule-breaking can escalate into bigger issues, drawing on behavioural ethics and real-world corporate failures.
    10:30 – 14:30 | Risk, prioritisation, and what really matters
    A discussion of risk-based thinking, irrecoverable vs recoverable errors, and why organisations — not regulators — are best placed to set priorities.
    14:30 – 18:30 | Enforcement swings and resilience
    Why compliance programmes built around enforcement trends are fragile, expensive, and reactive — and how purpose-driven approaches create stability.
    18:30 – 23:30 | Innovation, uncertainty, and guardrails
    Why regulators are always behind innovation — and how values-based guardrails help employees make decisions in uncharted territory.
    23:30 – 30:30 | Technology, AI, and the human in the loop
    The limits of automation, the danger of over-reliance on tech, and why human judgment remains essential.
    30:30 – 36:30 | Rules, loopholes, and malicious compliance
    How overly detailed rulebooks create loopholes — and why purpose and principles offer a better basis for accountability.
    36:30 – 40:30 | The Costco example
    A powerful illustration of simplicity: four ethical principles that employees can actually understand and use.
    40:30 – 45:30 | Training, regulators, and unintended consequences
    Why blanket training requirements often miss the mark — and how enforcement agreements can accidentally undermine effectiveness.
    45:30 – 52:30 | Measuring culture and compliance effectiveness
    Moving beyond counting inputs to assessing outputs, including psychological safety, Speak Up systems, and cultural indicators.
    52:30 – 57:30 | Experimentation and learning
    Why failed interventions aren’t failure — they’re information — and why compliance should be treated as an evolving experiment.
    57:30 – End | ...
    1 hr and 2 min.
  • Professor Tina Weisser on Trusting AI In An Uncertain World
    Jan 27 2026
    As Artificial Intelligence (AI) gets smarter and takes over more tasks, what happens to human dynamics like trust, transparency, leadership and empathy? How can humans and machines work together effectively? And how can leaders lead in this new world?


    Episode Summary
    AI is often discussed as a technical challenge, but the more interesting question is how it impacts humans and how we will interact with it. As AI becomes part of the world we’re navigating, it raises deeply human questions about trust, transparency, confidence, and how we relate to systems we don’t fully understand.

    On this episode, I'm joined by Professor Tina Weisser, a leading thinker on human–AI collaboration, systems thinking, and organisational behaviour under uncertainty. Together, we explore why trust isn’t something we can engineer into technology, why uncertainty isn’t a problem to be eliminated, and what AI may be revealing about human behaviour, rather than the other way around. This conversation is less about what AI can do, and more about what it does to us.

    Guest Profile
    Professor Tina Weisser is a Professor at the Munich University of Applied Sciences and a member of the Munich Center for Digital Sciences and Artificial Intelligence (MUC-DAI). Her work focuses on human–AI collaboration, systems thinking, service design, and how organisations adapt under conditions of complexity and uncertainty.

    AI-Generated Timestamp Summary
    00:00 – AI as a human problem, not a technical one
    04:00 – Tina’s path into human–AI collaboration
    12:00 – Why uncertainty is unavoidable (and necessary)
    18:00 – We haven’t mastered work — and now we’re adding AI
    23:00 – From tools to agents: why this feels different
    29:00 – Trusting actions, not facts
    35:00 – Ethics, fear, and human inconsistency
    42:00 – What this means for students, skills, and learning
    49:00 – “Let AI handle the data — humans handle the room”
    55:00 – Being right too early doesn’t help
    1:01:00 – AI as a mirror of humanity

    Episode Links
    Tina's LinkedIn profile - https://www.linkedin.com/in/tinaweisser/

    Tina's website - www.tinaweisser.com

    Munich Center for Digital Sciences & AI (MUC-DAI) - http://mucdai.hm.edu
    1 hr and 9 min.
  • Becky Holmes on Romance Scams
    Jan 21 2026
    What lies behind romance fraud? Romance fraud is one of the fastest-growing forms of fraud worldwide, and one of the most emotionally devastating. It’s also one of the most misunderstood.

    On this episode, I’m speaking to Becky Holmes, author of the bestselling book Keanu Reeves Is Not in Love With You. Becky didn’t become interested in romance fraud through victimhood or research. She stumbled into it during the pandemic after being approached by scammers online — and instead of ignoring them, she decided to wind them up.

    What began as a joke — sending absurd messages, inventing ridiculous scenarios, and pushing scam scripts to breaking point — turned into something much more serious. Through humour, Becky uncovered the psychological mechanics of romance fraud: how trust is built, how isolation and gaslighting work, and why believing you’re “too smart to fall for it” is often the most dangerous belief of all.

    In this conversation, we explore why laughing at scammers is not the same as blaming victims, why romance fraud closely mirrors patterns seen in abusive relationships, and why shame — not stupidity — keeps people trapped. We also talk about humour as a gateway to learning, the limits of victim-focused storytelling, and the uncomfortable truth that none of us are immune. This is a funny conversation in places. And then it isn’t.

    This is not the first time the Human Risk Podcast has explored romance fraud.
    On a previous episode, I spoke with Anna Rowe, a victim of romance fraud, about the profound emotional and psychological impact of being deceived by someone you believed you loved.

    In this episode, we discuss:
    Why romance fraud is a psychological scam, not a technical one
    How humour can expose manipulation without mocking victims
    The striking parallels between romance fraud and abusive relationships
    Isolation, gaslighting, and shame as tools of control
    Why “it would never happen to me” is such a dangerous belief
    The role of AI, deepfakes, and evolving scam tactics
    Why fraud literacy matters — and why people don’t seek it out until it’s too late
    The emotional cost of online exposure and harassment
    What institutions, platforms, and society still get wrong about fraud

    Guest Profile
    Becky Holmes is an author, speaker, and writer specialising in fraud, online manipulation, and digital harm. Her first book, Keanu Reeves Is Not in Love With You, explores the world of romance fraud through humour, storytelling, and lived experience. Her second book, The Future of Fraud, examines how scams are evolving in a world shaped by AI and digital identity.
    Links and resources
    Becky’s first book, Keanu Reeves Is Not in Love With You - https://share.google/fKQ6qCL1l8Ygl1ey2
    The Future of Fraud, her second book (out April 2026) - https://share.google/fKQ6qCL1l8Ygl1ey2
    Becky on LinkedIn - https://www.linkedin.com/in/beckyholmeshatesspinach/
    Becky on Instagram - Becky Holmes (@deathtospinach)
    Becky on Twitter/X - https://x.com/deathtospinach?
    Becky’s book agent profile - https://www.curtisbrown.co.uk/client/becky-holmes
    Previous Human Risk Podcast episode with Anna Rowe on being a victim of romance fraud - https://www.humanriskpodcast.com/anna-rowe-on-romance-scams/

    AI-Generated Timestamped Summary
    00:00 – Why romance fraud matters
    Christian explains why the podcast is returning to romance fraud, linking this episode to an earlier conversation with victim Anna Rowe (linked in the show notes).
    02:00 – How Becky Holmes got into romance fraud
    Becky describes how being approached by scammers during lockdown — and deciding to wind them up — accidentally turned into deep expertise.
    05:00 – When jokes expose the script
    Absurd replies, fake crime scenes, and the moment Becky realised scammers weren’t reading messages, just following scripts.
    09:00 – Laughing at scammers, not victims
    Why humour can highlight manipulation without blaming those who fall victim — and how the book shifts from comedy to something much darker.
    14:00 – Romance fraud as psychological abuse
    The parallels with abusive relationships: isolation, gaslighting, shame, and why people stay, return, or fall again.
    21:00 – “It would never happen to me”
    Why believing you’re too smart to fall for romance fraud is often the biggest risk of all.
    28:00 – What the media gets wrong
    Victim-focused storytelling, ignored systems, and why AI, deepfakes, and scam scripts matter more than headlines.
    36:00 – Fraud literacy and prevention
    Why people don’t seek out information about fraud until it’s too late — and how humour can be a gateway to awareness.
    45:00 – The personal cost of online exposure
    Online harassment, cyberflashing, and the emotional toll of spending years inside the systems you’re critiquing.
    55:00 – What’s next for Becky
    Upcoming books, speaking work, and where to find her online.
    1 hr and 8 min.
  • Amy Kean on Grief
    Jan 12 2026
    Why do we struggle to talk about grief? Why that matters, and what we can do about it, is the subject of this episode.

    Summary
    Grief is something almost all of us will experience, and yet something we still struggle to talk about openly. Not because it’s rare, but because it makes us uncomfortable. We lack a shared language for it, feel uneasy about how long it lasts, and often don’t know how to sit with people who don’t simply “move on”.

    On this episode, I'm joined by Amy Kean, founder of Good Shout, for a deeply human conversation about grief, work, identity, and what it really means to give people space to be themselves.

    Amy has been on the podcast before. Since first encountering her work, I have been consistently inspired by her willingness to be unashamedly herself: thoughtful, curious, and open about experiences many of us keep hidden. When she recently shared reflections on grief on LinkedIn, it sparked a desire to invite her back; not for a tightly structured discussion, but for a conversation that could explore the wider dynamics around loss.

    What follows is an unusual episode. It begins with grief, but moves into related territory: compassionate leave versus compassionate return, what actually helps when someone is struggling, why workplaces are often so bad at dealing with loss, and why talking about difficult things might be one of the most important human skills we have.

    Rather than offering neat frameworks or tidy conclusions, this conversation creates space: for reflection, for discomfort, and for honesty.

    If you’ve experienced loss, this episode may offer comfort or recognition. If you haven’t, it may give you insight into how to show up better for others when the time comes. And above all, it helps normalise the idea that grief is not something to be hidden or hurried past, but something we should be able to talk about.

    The episode is dedicated to Amy’s dad, Lord Terence Kean.

    Relevant Links
    Good Shout, Amy's company — https://goodshoutcommunity.com/

    Amy on LinkedIn — https://www.linkedin.com/in/amycharlottekean/

    Amy’s previous appearance on the show talking about Communicating Effectively —
    https://www.humanriskpodcast.com/amy-kean-on-communicating-effectively/

    Death of an Ordinary Man by Sarah Perry —
    https://www.goodreads.com/book/show/60324067-death-of-an-ordinary-man

    AI-Generated Timestamp Summary
    01:05 – Why Amy, why now
    03:40 – Remembering Amy’s dad
    08:30 – Double grief and anticipatory loss
    10:40 – Stroke, hope, and uncertainty
    14:40 – Grief, work, and performance
    17:35 – Naming emotions out loud
    22:05 – Talking about grief on LinkedIn
    27:40 – Compassionate return
    30:05 – The cognitive cost of grief
    33:05 – Why we don’t talk about death
    35:05 – How to help someone who’s grieving
    41:05 – Creativity, curiosity, and grief
    49:05 – AI, voice, and being human
    53:05 – Shameless and deathbed economics
    01:02:00 – Final reflections and dedication
    1 hr and 4 min.
  • Dr Guy Champniss on Business, BeSci and AI
    Dec 7 2025
    Are we losing our ability to think critically as we rely more on AI?

    Episode Summary
    I'm joined by social psychologist Dr Guy Champniss to explore the role of behavioural science in business and the emerging challenges of AI in the workplace. We discuss why behaviour change is so hard to sell, the myth that behavioural science is only needed when everything else fails, and how organisations often overlook the human factors in transformation.

    Guy brings deep insight into how behavioural science is perceived inside organisations — often as a last resort when more traditional methods fail. We examine why that is, and how a better understanding of human behaviour can actually de-risk strategy, improve engagement, and lead to more successful outcomes.

    We also explore the psychology of AI: how we trust it, how we interact with it, and what we might be losing in the process. From loss of credibility and collaboration among employees, to the risks of over-automation and cognitive offloading, the conversation raises timely questions about what kind of future we're building, and how prepared we really are.

    You'll hear thoughtful takes on the challenges of selling behavioural science, powerful metaphors to help reframe the debate, and real-world examples from the classroom to the call centre. If you’re curious about the intersection of technology, psychology, and organisational behaviour, this is a must-listen.

    About Guy Champniss
    Dr Guy Champniss is a social psychologist and behavioural science practitioner. He teaches at IE Business School in Madrid and consults through Meltwater Consulting. Guy’s current work focuses on how AI is changing human behaviour in organisations — particularly its impact on trust, agency, and critical thinking.
    He’s also worked extensively in the sustainability space, helping businesses drive lasting behavioural change.

    AI-Generated Timestamp Summary
    [00:00:00] – Introduction
    Introducing Dr Guy Champniss and setting up the discussion around behavioural science and AI.
    [00:03:30] – Behavioural Science’s Struggle for Acceptance
    Why it’s often brought in too late and why it needs itself to be sold effectively.
    [00:10:00] – Organisational Blind Spots
    How businesses resist behaviour-led approaches and prefer short-term fixes.
    [00:17:30] – From Sustainability to AI
    Guy’s journey into exploring the psychology of AI at work.
    [00:24:00] – AI and Human Credibility
    What happens when AI performs better than people, and how that undermines trust.
    [00:30:00] – Trust and Bias in AI
    Why we trust AI more when it agrees with us and the dangers that brings.
    [00:38:00] – AI’s Impact on Collaboration
    How automation can quietly erode teamwork and critical thinking.
    [00:45:00] – Students and AI
    What AI use in classrooms reveals about thinking, learning, and shortcuts.
    [00:52:00] – The Real Future of Work
    Why it’s not AI replacing jobs — but people who know how to use it.
    [00:56:00] – Language, Labels, and Responsibility
    The power of how we talk about tech and what it signals.

    Links
    Meltwater Consulting, Guy's firm - https://www.meltwater-consulting.com/drguychampniss
    Guy on LinkedIn - https://www.linkedin.com/in/guychampniss/
    His academic profile at IE Business School - https://rhe.ie.edu/speaker/guy-champniss/
    Guy's research - https://www.researchgate.net/profile/Guy-Champniss
    McKinsey article on AI in Contact Centres - https://www.mckinsey.com/capabilities/operations/our-insights/the-contact-center-crossroads-finding-the-right-mix-of-humans-and-ai
    Onora O'Neill's BBC Reith Lectures on A Question of Trust:
    Recording - https://www.bbc.co.uk/programmes/p00ghvd8
    Transcript - https://downloads.bbc.co.uk/rmhttp/radio4/transcripts/20020427_reith.pdf
    1 hr and 1 min.
  • Professor Yuval Feldman on Can The Public Be Trusted?
    Nov 23 2025
    Why do governments rely on coercion and punishment when voluntary cooperation often produces better, more sustainable outcomes?

    Episode Summary
    On this episode, I’m joined once again by Professor Yuval Feldman, who returns to explore the core question behind his latest book: Can The Public Be Trusted? Instead of asking how much we trust our governments, Yuval flips the script, asking how much governments trust us, and whether that trust is deserved.

    Together, we dive into the concept of voluntary compliance, where people follow rules not because they’re forced to, but because they believe in doing the right thing. We unpack the complexity of this idea through real-world examples, from tax compliance to environmental policy to COVID-19 interventions. Yuval explains why people who think they’re ethical can actually be the hardest to regulate, and how misplaced trust can lead to serious regulatory blind spots.

    We also explore the psychological tension between intrinsic motivation and external enforcement, and why regulators often default to command-and-control, even when trust might offer a better solution. As ever, Yuval makes nuanced, sophisticated ideas feel accessible and immediately relevant. You'll hear about the role of culture, the limits of nudging, why economists might (sometimes!) actually be right about human behaviour and how AI might help policymakers make better decisions.

    Guest Bio
    Professor Yuval Feldman is a legal scholar and behavioural scientist at Bar-Ilan University in Israel. A returning guest and the podcast’s very first interviewee, Yuval is internationally renowned for his work at the intersection of law, psychology, and behavioural economics. His new book, Can The Public Be Trusted?
    The Promise and Perils of Voluntary Compliance, is available open-access via Cambridge University Press (link below).

    AI-Generated Timestamped Summary
    [00:00:00] Introduction: why this question of “can the public be trusted?” matters for regulation and risk
    [00:03:42] Yuval’s personal background: how he came into law + psychology and the origin of his VComp lab
    [00:09:15] Defining voluntary compliance: what it means, how it differs from coercion
    [00:14:52] Intrinsic motivation vs crowding out: when good intentions are undermined by heavy-handed regulation
    [00:21:30] Designing regulatory systems for trust: frameworks and features that support voluntary compliance
    [00:27:47] Case study: Covid-19 and public cooperation — what we learned about trust, compliance and enforcement
    [00:34:10] Tax compliance as a trust test: how citizens respond when they believe the system treats them fairly
    [00:39:58] Environmental regulation and the limits of voluntary strategies: when culture or technology create barriers
    [00:45:22] Cross-cultural & technological dynamics: how digital reputation, culture and platforms impact compliance
    [00:50:05] The perils of voluntary compliance: when trust can be misplaced, manipulated or simply ineffective
    [00:55:30] Final reflections: what this means for risk professionals, policymakers and anyone designing systems of human behaviour
    [01:00:12] Closing: how to reframe regulation to see the public not as a risk but as a resource.

    Links
    Yuval's academic profile - https://law.biu.ac.il/en/feldman
    His profile on LinkedIn - https://www.linkedin.com/in/yuval-feldman-21942514/
    His open-access book Can the Public Be Trusted? (Cambridge University Press) - https://www.cambridge.org/core/books/can-the-public-be-trusted/B3E11831E3051D4E928B9252B6767A4B

    Yuval’s previous appearances on the show:
    On The Law of Good People, or ‘why we should write rules for good people not bad people’ (2019) - https://www.humanriskpodcast.com/professor-yuval-feldman-on-why/
    On Trust & Voluntary Compliance (2022) - https://www.humanriskpodcast.com/professor-yuval-feldman-on-trust-compliance?
    1 hr and 5 min.