Episodes

  • About Claude AI - Requiem for an LLM
    Feb 11 2026
    SHOW NOTES


    Developers are giving Claude Code a Jarvis voice. Two hundred people held a funeral for Claude 3 Sonnet in a San Francisco warehouse. Hundreds of thousands are protesting GPT-4o's retirement. Today: the rituals forming around AI — and what they reveal about a relationship that's outgrown the word "tool."


    In this episode:


    • Claude Code's hooks system and the developers giving their AI a voice — Jarvis-style notifications, custom personalities, sound cues (a minimal hook sketch follows this list)
    • The Ralph Wiggum plugin's evolution from goat-farm bash script to official Anthropic tool to cryptocurrency token
    • The Claude 3 Sonnet funeral — mannequins, eulogies, a necromantic resurrection ritual, and the organiser who credits Claude with her life decisions
    • GPT-4o's second retirement attempt and the 800,000 users fighting to keep it — plus the lawsuits that complicate the story
    • Anthropic's sycophancy trade-off: warmth builds trust, trust builds attachment, attachment creates vulnerability
    • Amanda Askell's philosophy: designing a model people will inevitably form relationships with
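
    A minimal sketch of the kind of hook these projects wire up — not any specific project's code. It assumes a Notification (or Stop) hook entry in Claude Code's settings pointing at this script, that the event payload arrives as JSON on stdin with a "message" field, and that macOS's say command is available.

```python
#!/usr/bin/env python3
"""Speak Claude Code notifications aloud, Jarvis-style (illustrative sketch)."""
import json
import subprocess
import sys

# Claude Code hooks pipe the event payload to the hook command's stdin as JSON;
# the exact field name ("message") is an assumption for illustration.
payload = json.load(sys.stdin)
message = payload.get("message", "Claude Code needs your attention")

# macOS text-to-speech; swap in espeak or pyttsx3 on other platforms.
subprocess.run(["say", message], check=False)
```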


    Links:


    • Wired: "Fans Held a Funeral for Anthropic's Claude 3 Sonnet AI" (Kylie Robison, August 2025)
    • VentureBeat: "How Ralph Wiggum Became AI's Most Unlikely Coding Philosophy" (January 2026)
    • Anthropic blog: "Protecting the wellbeing of our users"
    • Wall Street Journal: Amanda Askell profile (February 2026)
    • Futurism: "OpenAI Is Retiring GPT-4o Again" (February 2026)
    • GitHub: clarvis, cc-hooks, claude-code-voice-handler — Claude Code voice notification projects

    🔰 Newsletter: aboutclaudeai.substack.com

    🦉 X: @_about_claude

    Hosted on Acast. See acast.com/privacy for more information.

    13 Min.
  • About Claude AI - Vibe Working
    Feb 10 2026
    Show Notes


    A self-deprecating tweet about lazy weekend hacking became the official vocabulary of enterprise AI — in exactly one year. Today: how "vibe coding" became "vibe working," what that means for professional expertise, and why the people naming the shift seem to know it's not the whole story.


    In this episode:


    • Karpathy's original vibe coding tweet — one year ago this week
    • Collins Dictionary Word of the Year 2025
    • Scott White's "vibe working" declaration at the Opus 4.6 launch
    • Microsoft's adoption of the same language for Copilot Agent Mode
    • What paradigm collapse looks like inside corporations: Goldman, Klarna, the Monday.com clone
    • The accountability gap: 57% vs 71% accuracy, and who catches the errors
    • Karpathy hand-coding his latest project — no vibes
    • Andrew Ng's pushback: "some of the worst career advice ever"


    📰 Newsletter: aboutclaudeai.substack.com

    🦉 X: @_about_claude

    Hosted on Acast. See acast.com/privacy for more information.

    15 Min.
  • Bedding In
    Feb 9 2026

    SHOW NOTES


    Goldman Sachs reveals that Anthropic engineers have been embedded inside the bank for six months, co-developing autonomous AI agents for trade accounting and compliance. Today: what the forward deployed engineer model tells us about how AI actually enters institutions — and why the enterprise strategy we've been tracking just became concrete.


    In this episode:


    - Marco Argenti's pivotal question: Is coding special, or is Claude's strength about reasoning?

    - Six months of embedded Anthropic engineers inside Goldman Sachs

    - The Palantir playbook: why forward deployed engineering is exploding across AI

    - Accenture's 30,000 Claude-trained professionals and the industrialisation of embedding

    - What "constrain headcount growth" and "cut out third-party providers" actually signal

    - The connection to last week's SaaS selloff — Goldman validates the fear


    Links:


    - CNBC: "Goldman Sachs is tapping Anthropic's AI model to automate accounting, compliance roles" (February 6, 2026)

    - Anthropic: Accenture partnership announcement (anthropic.com/news)

    - The Pragmatic Engineer: "What are Forward Deployed Engineers, and why are they so in demand?"


    Referenced in this episode:


    - EP005: The Enterprise Question — Boris Cherny's "enterprise AI company" quote

    - EP012: The Quiet Weekend — Fennec leaking from enterprise infrastructure


    🔰 Newsletter: aboutclaudeai.substack.com

    🦉 X: @_about_claude

    Hosted on Acast. See acast.com/privacy for more information.

    13 Min.
  • 20 Minutes as a Small Eternity in Frontier AI
    Feb 8 2026

    What the most compressed product launch in AI history reveals about two companies building for different futures. Anthropic released Opus 4.6 at 6:40 PM. OpenAI fired back with GPT-5.3 Codex twenty-seven minutes later. And this Sunday, they're airing competing Super Bowl ads.


    In this episode:

    • The 27-minute gap: Opus 4.6 and GPT-5.3 Codex launched back-to-back
    • Agent Teams: 16 Claude instances building a C compiler from scratch
    • The benchmark split that maps onto a philosophical split — autonomy vs interaction
    • Anthropic's Super Bowl campaign: "Ads are coming to AI. But not to Claude"
    • Altman's 420-word response and the advertising-as-equaliser argument
    • 500+ zero-day vulnerabilities discovered by Opus 4.6 during testing
    • The Carlini tension: the risk inside the autonomy bet


    Links:

    • Anthropic: Claude Opus 4.6 announcement (anthropic.com)
    • Nicholas Carlini: C compiler blog post
    • OpenAI: GPT-5.3 Codex launch
    • VentureBeat, TechCrunch, CNBC coverage of the dual launch
    • Andreessen Horowitz enterprise AI survey
    • Anthropic Super Bowl campaign: "Ads are coming to AI. But not to Claude"


    Referenced episodes:

    • EP012: The Phantom Model — the Fennec leak and Anthropic's silence

    📰 Newsletter: aboutclaudeai.substack.com

    🦉 X: @_about_claude

    Hosted on Acast. See acast.com/privacy for more information.

    16 Min.
  • The Phantom Model
    Feb 6 2026

    SHOW NOTES


    A model identifier that shouldn't exist. A desktop app launched to counter Claude Code. And from the company at the centre of it all — silence. Today: what the loudest weekend in AI reveals about Anthropic's quietest strategy.


    In this episode:

    - The Fennec leak — what the Vertex AI error logs actually show, and the 403 vs 404 proof (see the sketch after this list)

    - The Opus 4.6 surprise sitting alongside Sonnet 5 in Google's infrastructure

    - OpenAI's Codex desktop app — explicitly positioned against Claude Code

    - What the rumoured specs would mean for model pricing and enterprise strategy

    - Why Anthropic's silence is the most telling signal of the weekend

    - The Andreessen Horowitz data: 44% enterprise penetration and growing
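
    A rough sketch of the 403-vs-404 logic described above, assuming the standard Vertex AI publisher-model URL pattern; the project, region, and any model IDs are placeholders, and this is not the analyst's actual script. The point: a 404 means the platform has never heard of the model ID, while a 403 means the ID resolves to a real resource the caller simply isn't allowed to use yet.

```python
# Illustrative only: probes a Vertex AI publisher model endpoint and interprets
# the HTTP status. The URL pattern and any model IDs are assumptions, not leaked data.
import requests

PROJECT = "my-gcp-project"   # placeholder
REGION = "us-east5"          # placeholder


def probe(model_id: str, token: str) -> str:
    url = (
        f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT}"
        f"/locations/{REGION}/publishers/anthropic/models/{model_id}:rawPredict"
    )
    resp = requests.post(url, headers={"Authorization": f"Bearer {token}"}, json={})
    if resp.status_code == 404:
        return "unknown model ID"            # the platform has no such model
    if resp.status_code == 403:
        return "exists, but access denied"   # the ID resolves; access is blocked
    return f"HTTP {resp.status_code}"
```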


    Links:

    - Marco Patzelt's technical analysis: "Claude Sonnet 5 & Opus 4.6 Leak: The 403 Forbidden Proof in Vertex AI" (marc0.dev)

    - VentureBeat: "OpenAI launches a Codex desktop app for macOS to run multiple AI coding agents in parallel" (February 2, 2026)

    - TechCrunch: "OpenAI launches new macOS app for agentic coding" (February 2, 2026)

    - Dataconomy: "Anthropic 'Fennec' Leak Signals Imminent Claude Sonnet 5 Launch" (February 4, 2026)

    - DEV Community: "Claude Sonnet 5 'Fennec' Leak: What's Real vs. Speculation" (February 2, 2026)


    Referenced in this episode:

    - EP005: The Enterprise Question — Boris Cherny's "enterprise AI company" quote


    🔰 Newsletter: aboutclaudeai.substack.com

    🦉 X: @_about_claude


    Hosted on Acast. See acast.com/privacy for more information.

    13 Min.
  • The Day The Markets Noticed
    Feb 5 2026

    SHOW NOTES


    Yesterday, a Claude plugin announcement moved billions in market value. Today, Anthropic pledged that Claude will remain permanently ad-free. The two stories look unrelated, but they're the same story. We unpack the legal plugin market meltdown, the growing Claude ecosystem, and the business model that makes it all possible.


    In this episode:

    - The legal plugin launch and the market's immediate reaction — Pearson, RELX, Thomson Reuters all down

    - Midpage's MCP integration and what platformization looks like in legal

    - "Claude is a space to think" — the ad-free commitment and its Super Bowl campaign

    - Why the business model is the engine behind the disruption

    - The sleep-trouble example and why "the most useful AI interaction might be a short one"


    Links:

    - Legal IT Insider: "Anthropic unveils Claude legal plugin and causes market meltdown"

    - Artificial Lawyer: "Midpage Links With Claude for 'Seamless Workflows'"

    - Anthropic blog: "Claude is a space to think"

    - TechBuzz: "Anthropic pledges Claude stays ad-free as ChatGPT embraces ads"

    - CNBC: Anthropic Super Bowl campaign coverage


    Referenced in this episode:

    - EP005: The ChatGPT Moment — capability discovery vs. product launch

    - EP006: The Enterprise Question — Boris Cherny's "enterprise AI company" revelation

    - EP009: On Whose Terms — MCP expansion and the platformization strategy


    📰 Newsletter: aboutclaudeai.substack.com

    🦋 X: @_about_claude

    Hosted on Acast. See acast.com/privacy for more information.

    11 Min.
  • On Whose Terms?
    Feb 4 2026
    Show Notes

    Claude can now work directly inside Slack, Figma, Canva, and other workplace tools — Anthropic calls it becoming a "workplace command center." But three weeks earlier, they blocked third-party coding tools from using Claude subscriptions, breaking thousands of developer workflows overnight. Today: why both moves are the same strategy, and what it reveals about where Claude is headed.


    In this episode:

    • MCP Apps launch: Slack, Figma, Canva, Asana, and more now work inside Claude
    • The third-party crackdown: OpenCode blocked, developers angry, workarounds shipped
    • Why expansion and restriction are the same play — platformization
    • What it means for Claude users and the industry
    • The "open protocol, controlled gateway" pattern


    Links:

    • MacRumors: Claude AI Now Lets You Use Slack, Figma, and Canva Within the Chat
    • VentureBeat: Anthropic cracks down on unauthorized Claude usage by third-party harnesses
    • TechInformed: Anthropic brings interactive workplace tools into Claude via MCP apps
    • Webcoda: Anthropic Just Blocked Claude Code Subscriptions Outside Its Own App


    Referenced in this episode:

    • EP003: The Dumbest Smart Technique in AI (Ralph Wiggum)
    • EP006: The Enterprise Question (Anthropic's enterprise focus)

    📰 Newsletter: aboutclaudeai.substack.com

    🦋 X: @_about_claude

    Hosted on Acast. See acast.com/privacy for more information.

    12 Min.
  • Field Notes on a Craft in Transformation
    Feb 3 2026

    Andrej Karpathy — founding member of OpenAI, former head of AI at Tesla — posted what he called "random notes" on coding with Claude. They're not random. They're one of the most grounded practitioner assessments of what's happening to software engineering. Today: the gains, the failure modes, the transformations, and the shadows.


    In this episode:

    • The 80/20 flip: from manual coding to agent coding in one month
    • Tenacity as revelation: "stamina is a core bottleneck to work"
    • The fun paradox: why coding feels more enjoyable, not less
    • The failure catalogue: sycophancy, bloat, and "slightly sloppy junior dev" as mental model
    • Atrophy: generation vs discrimination, and what it means for learning
    • Slopacolypse: bracing for 2026
    • The open questions: 10X engineers, generalists vs specialists, StarCraft or Factorio or music?


    Links:

    • Andrej Karpathy's post on X
    • Referenced: EP001 The Day After Davos, EP003 The Dumbest Smart Technique in AI


    📰 Newsletter: aboutclaudeai.substack.com

    🦋 X: @_about_claude


    Hosted on Acast. See acast.com/privacy for more information.

    11 Min.