Joe Braidwood (GLACIS AI): The Exit No One Celebrates


About this title

The product worked.

Joe Braidwood built Yara AI to democratize mental health support. People who couldn't afford therapy. People who didn't know how to ask for help. People awake at 3am with nowhere to turn. The technology delivered.

He shut it down anyway.

Not because it failed. Because LLMs architecturally cannot guarantee 100% safety. They lose character at the edges of long conversations. They're probabilistic, not deterministic. At scale, even a 0.0001% failure rate means real people in real crisis getting the worst possible response at the worst possible moment.

Joe couldn't prove it wouldn't happen. The investors were ready. The mission was pure. The cap table said go.

He stopped.


🎙️ Guest

Joe Braidwood was the first employee at SwiftKey, acquired by Microsoft for $250 million when he was 29. The money made him miserable.

After losing his best friend to brain cancer, a friend who spent his final months the happiest he'd ever been, Joe promised to carry forward that sense of positivity and purpose. Yara was supposed to be that. When the architecture couldn't guarantee what the mission required, he pivoted to GLACIS, building safety infrastructure for the AI systems that will eventually get this right.


🔥 Key Insights

✅ "Almost perfect" has a body count at scale

99.99% sounds incredible. Deploy to a million users and that's still 100 failures. In mental health, those failures cluster around the most vulnerable people in the most desperate moments. The person who needs help most is the one most likely to push the model past its limits.
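The arithmetic behind that claim is easy to check. A minimal sketch (the helper below is illustrative, not from the episode):

```python
def expected_failures(users: int, failure_rate: float) -> float:
    """Expected number of failed interactions, assuming one
    interaction per user and an independent per-interaction
    failure rate."""
    return users * failure_rate

# 99.99% reliability means a 0.01% (0.0001) failure rate.
# Across a million users, that is still 100 expected failures.
print(expected_failures(1_000_000, 0.0001))  # 100.0
```

And as the episode notes, those 100 are not randomly distributed: they cluster among the users most likely to push the model to its limits.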


✅ Incentives pull everyone toward the same compromises

Joe could have raised more money. The product worked. Every signal said keep going. But taking the moral high road is almost impossible when everything pushes the other way: investors wanting growth, competitors cutting corners, your own team's momentum. The question isn't whether you have values. It's whether your values survive contact with a cap table.


✅ The architecture has ceilings, not just bugs

AI models risk losing their "character" at the edges of long conversations. Safety instructions get pushed out of context. The model forgets who it's supposed to be. This isn't something you patch. It's how the technology currently works. A clean test run doesn't prove the failure can't happen; it only proves it hasn't happened yet.


✅ Better guardrails make humans worse

The more reliable your systems, the less responsibility people take. At 99.9%, we're catastrophically bad at handling the 0.1%. We stop paying attention. We assume something will catch us. The guardrail becomes the danger.


✅ You can't lead from permanent fight-or-flight

Joe points to Dario Amodei, running the most consequential AI company while structuring his days around thinking, reading, writing. If he can slow down while navigating existential risk, what's stopping you?

▶️ Listen now

Joe's not saying AI therapy is impossible. He's saying we're not there yet, and pretending otherwise has costs we're not willing to name.
