We’re Racing Toward AI We Can’t Control | For Humanity #79
About this title
In this episode of For Humanity, John sits down with AI professor and safety advocate David Krueger to discuss his new nonprofit Evitable, the race toward superintelligence, AI alignment, job loss, geopolitics, and why he believes we have less than five years to change course.

David shares his journey from deep learning researcher to public advocate, his role in the 2023 Center for AI Safety extinction risk statement, and why he believes AI is not just a technical problem—but a governance and public awareness crisis.
Together, they explore:
* Why AI extinction risk is real
* Why research alone won’t save us
* The dangers of the AI chip supply chain race
* Job displacement and political blind spots
* Alignment skepticism
* Whether treaties can work
* What gives David hope in 2026
If you’ve ever wondered whether AI risk is overblown—or not taken seriously enough—this is a conversation you don’t want to miss.
🔗 Follow David Krueger
* Learn more about Evitable
* David’s Substack
* Follow David on Twitter
📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.
Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
