
The AI Con

How to Fight Big Tech’s Hype and Create the Future We Want


By: Emily M. Bender, Alex Hanna
Narrated by: Jade Wheeler

About this title

A smart, incisive look at the technologies sold as artificial intelligence, the drawbacks and pitfalls of technology marketed under this banner, and why it’s crucial to recognize the many ways in which AI hype covers for a small set of power-hungry actors at work and in the world.

Is artificial intelligence going to take over the world? Have big tech scientists created an artificial lifeform that can think on its own? Is it going to put authors, artists, and others out of business? Are we about to enter an age where computers are better than humans at everything?

The answer to these questions, linguist Emily M. Bender and sociologist Alex Hanna make clear, is “no,” “they wish,” “LOL,” and “definitely not.” This kind of thinking is a symptom of a phenomenon known as “AI hype.” Hype looks and smells fishy: It twists words and helps the rich get richer by justifying data theft, motivating surveillance capitalism, and devaluing human creativity in order to replace meaningful work with jobs that treat people like machines. In The AI Con, Bender and Hanna offer a sharp, witty, and wide-ranging take-down of AI hype across its many forms.

Bender and Hanna show you how to spot AI hype, how to deconstruct it, and how to expose the power grabs it aims to hide. Armed with these tools, you will be prepared to push back against AI hype at work, as a consumer in the marketplace, as a skeptical newsreader, and as a citizen holding policymakers to account. Together, Bender and Hanna expose AI hype for what it is: a mask for Big Tech’s drive for profit, with little concern for who it affects.

©2025 Emily M. Bender and Alex Hanna (P)2025 HarperCollins Publishers
Workplace & Organizational Behavior, Computer Science

You might also be interested in these titles

Empire of AI
AI Snake Oil
Careless People
The Thinking Machine
The Fort Bragg Cartel
If Anyone Builds It, Everyone Dies
Abundance
AI Engineering
House of Huawei
The Optimist
Klasse
Air-Borne
Original Sin
Game over - Der Fall der Credit Suisse
Superagency
Human Compatible

Very one-sided
A solid book if you're interested in current AI harms.
But when it comes to key questions about how AI capabilities and impacts are likely to progress, I found the book highly one-sided and superficial.
There are definitely plenty of overhyped AI claims going around, and the authors are right to call it out when a tech company makes big claims and then underdelivers, potentially causing more harm than good in the process.
However, it's clearly not all hype. AI has become significantly more powerful and useful over the last few years and has diffused widely, bringing both benefits and harms. But the authors only see one side of the coin.

Key views of the authors rest on the assumption that AI capabilities are surely going to stagnate and that the "AI hype" bubble is going to pop soon.
Where does their confidence in these predictions come from? I was not able to find any arguments in the book. Mostly, it seems to be based on vibes, combined with a confused notion of intelligence. (Essentially, the authors seem to suggest that there's no meaningful sense in which one entity could be considered more intelligent than another, which is about as nonsensical as it sounds.
For a more elaborate response to this kind of intelligence denialism in the context of AI, I can highly recommend the paper "On the Impossibility of Supersized Machines".)

The worst part of the book is its insistence that worries about catastrophic or existential AI risks are unfounded and even a harmful "distraction". The authors call such worries about superintelligent general AI "science fiction" and speculative, but they fail to recognize that claims about AI stagnating soon are at least as speculative. If you compare the probabilities each side implicitly assigns, the claims of stagnation are arguably far more speculative:
"Doomers" typically claim only that there is a significant risk of AI catastrophe (e.g. 10% or more) and that we should prepare for such scenarios, whereas the authors express near certainty that AI will not reach such capabilities in our lifetimes and can therefore be safely ignored.
This overconfidence is unfounded and, frankly, reckless and irresponsible.
