Episodes

  • Killing all the Lawyers
    Feb 10 2026

    Shakespeare's 'let's kill all the lawyers' is commonly misread as anti-lawyer sentiment. The correct interpretation recognises that eliminating lawyers is a prerequisite for tyranny. This podcast examines that insight through two contemporary phenomena: artificial intelligence's potential to displace legal professionals, and technology oligarchs' systematic efforts to subvert legal constraints on their power.

    15 min.
  • AI Fabrication in the Courtroom
    Oct 6 2025

    The legal profession faces an accelerating governance crisis that should concern every senior leader overseeing regulatory frameworks and public accountability. Over 410 documented cases worldwide reveal lawyers submitting fabricated court citations generated by artificial intelligence, a problem that has exploded from a few incidents monthly to multiple cases daily in 2025.

    The implications extend far beyond courtrooms. Client expectations for AI integration nearly doubled between 2024 and 2025, yet only 21 per cent of legal firms report comprehensive adoption frameworks, creating dangerous gaps between technological deployment and professional competence.

    13 min.
  • The Role of Experts in the Courtroom
    Oct 6 2025

    Expert witnesses are supposed to serve the court, not the party paying them – but across six major jurisdictions, the rules governing how this actually works differ so dramatically that international litigation has become a procedural minefield.

    As artificial intelligence begins reshaping expert testimony itself, understanding these jurisdictional frameworks has never been more urgent.

    18 min.
  • AI in the Courtroom
    Oct 6 2025

    Courts worldwide are navigating uncharted waters with artificial intelligence, and their radically different approaches reveal a governance crisis that demands immediate attention from senior leaders. Across eight major jurisdictions, courts have responded to generative AI with starkly contrasting frameworks: New South Wales has imposed categorical prohibitions on AI-generated witness evidence and mandates sworn declarations that AI was not used,¹ whilst Singapore takes a permissive stance requiring no disclosure unless specifically requested, placing full responsibility on individual practitioners.²

    This fragmentation is not merely academic. Courts in the United States and Australia have already sanctioned lawyers for filing submissions citing entirely fabricated cases generated by AI 'hallucinations', where systems like ChatGPT created plausible-sounding but completely fictitious legal precedents.³ The consequences extend far beyond professional embarrassment to fundamental questions about evidentiary integrity, access to justice for self-represented litigants, and the preservation of confidential information that may be inadvertently fed into public AI systems and become permanently embedded in their training data.⁴

    The window for proactive governance is closing rapidly, yet no international consensus has emerged on how to balance innovation with risk management in the administration of justice. New Zealand has pioneered a three-tiered approach with separate guidelines for judges, lawyers and non-lawyers, recognising that different court users face fundamentally different obligations and capabilities,⁵ whilst the United Kingdom has focused exclusively on guidance for judicial officers without addressing practitioner conduct.⁶

    For government executives responsible for policy development, regulatory frameworks, and public sector digitalisation, understanding these divergent approaches is not optional. The report exposes critical gaps in current governance models and demonstrates why courts are moving from permissive to restrictive regulation as verification mechanisms struggle to keep pace with technological advancement.⁷ Download the full analysis to understand how these judicial responses should inform your organisation's approach to AI governance, professional liability frameworks, and access to justice initiatives before fragmented regulation creates compliance nightmares across jurisdictions.

    14 min.
  • The Future of AI Has Arrived Early
    Jun 6 2025

    In this episode I review the publication AI 2027 and take a look at where AI is today, and what that means for you.

    7 min.
  • One Big Beautiful AI Regulation
    May 25 2025

    Donald Trump's "One Big Beautiful Bill" has passed the US House of Representatives. On page 291, the bill proposes a new law on AI regulation with massive impact in the US, and potentially globally.

    18 min.
  • Anatomy of Project Failure
    Dec 26 2024

    Bram Stoker’s character Professor Van Helsing, in Dracula, said: “we learn from failure, not from success!”

    This is a favourite quotation of mine, and after forty years of managing IT projects, I have spent the last ten of them studying IT project failure. Along the way I have developed some ‘minority opinions’.

    My interest in why projects fail was sparked by a question attributed to the former Chief Information Officer at the Treasury Board of Canada, who raised his hand at a Standish Group presentation in 1995. The question has come to be known as Cobb’s Paradox:

    ‘We know why projects fail, we know how to prevent their failure – so why do they still fail?’

    12 min.
  • AI Governance in Crisis
    Nov 2 2024

    AI Governance Crisis in Financial Services

    A groundbreaking investigation by the Australian Securities and Investments Commission (ASIC) has revealed an extremely concerning disconnect between the rapid adoption of AI technologies and the maturity of governance frameworks in the financial sector. The comprehensive review of 624 AI use cases across 23 financial institutions exposes an alarming "governance gap" that threatens both market stability and consumer protection. With 61% of financial institutions planning to accelerate their AI deployment in the next 12 months and generative AI adoption soaring to 22% of use cases in development, the urgency for robust oversight has never been greater.

    The findings paint a concerning picture of the industry's readiness to handle AI risks: less than half of the reviewed institutions had policies addressing AI fairness or consumer disclosure requirements. This governance deficit comes at a critical time when financial institutions are rapidly deploying increasingly complex and opaque AI systems that directly impact consumer outcomes. Our latest podcast episode dives deep into these findings, featuring exclusive insights from industry leaders and regulatory experts on how organizations can bridge this dangerous gap before it's too late. Don't miss this essential discussion on one of the most pressing challenges facing the financial sector today.


    11 min.