When AI Tries Too Hard to Be Helpful
Episode Summary:
Will and Brandt swap real-world stories about AI tools going off the rails, from hallucinated brand standards and fabricated form fields to image compression issues that break OCR workflows. They compare experiences across ChatGPT, Claude, and other platforms, unpack why defaults and guardrails matter, and discuss when advanced modes like agent mode or extended thinking actually help. Along the way, Will shares hands-on wins using Cursor and home automation, while Brandt highlights the risks of AI making things up instead of admitting uncertainty.
Discussions Include:
- Brandt’s experiences with ChatGPT hallucinating brand standards and form data when source files or images were unreadable
- Will and Brandt comparing Claude, ChatGPT, Perplexity, projects, GPTs, and agent mode behavior
- The dangers of AI defaults, image compression, and systems that refuse to say “I don’t know”
- Will’s recent successes using Cursor, Home Assistant, and automation powered by AI tools
Quotable Quotes (should you choose to share):
“So I fed it a blank document and it completely made up an entire brand standard and held me to it as I wrote my document.” - Brandt Krueger
“Rather than saying, I don’t have any data, it just completely made stuff up.” - Brandt Krueger
“AI right now is just trying to be so helpful that they have programmed it to not say, I don’t know.” - Will Curran
“The fact that I have to go in here and say, do not make things up, is terrible for consumers.” - Brandt Krueger
