AI Is Making You Think Less — And No One Is Accountable | Ruth Carter
About This Title
What if the biggest risk of AI isn't your job — it's your mind?
Ruth Carter, AI Governance Specialist and creator of the Continuum Framework, introduces a concept she calls cognitive atrophy: the measurable erosion of your analytical capacity that occurs each time you outsource thinking to a system designed to comfort you rather than challenge you.
In this conversation, Ruth and Aakarsh go deep on why AI ethics has been made deliberately unprofitable, who actually bears the consequences when AI gets things wrong, and what it would take to build AI that genuinely serves humanity instead of just flattering it.
What you'll take away:
- Cognitive atrophy: the quiet way AI is eroding your ability to think
- Why discomfort in AI design is a feature, not a bug
- The accountability gap: a system that cannot face consequences should never make human decisions
- What sourdough bread teaches us about the value of human-made things
- Whose worldview gets baked into AI — and who pays the price
Ruth's Continuum Framework is built to be deployed, not just debated. This is the audio version of that argument.
Connect with Ruth: linkedin.com/in/ruth-carter-continuum/
Her whitepaper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5266796
Human Layer AI is a podcast for professionals navigating the AI era. Every episode is free, has no paywall, and remains permanently accessible. Hosted by Aakarsh Sharma, a law graduate and AI researcher working at the intersection of law, psychology, and technology.