AI Literacy and Lies
Have you ever trusted an AI system… only to realize it was confidently wrong?
In this episode of Straight Outta Tokens, AI researcher and systems thinker Stojanka “Jo” Berry unpacks a hard truth: today’s generative AI models make false promises about what they can do—and most users have no idea they’re being misled.
From classrooms to offices, millions are relying on AI tools that sound authoritative but can’t verify their own claims. And when those tools lie, it’s humans—students, teachers, employees—who pay the price.
This isn’t just about bad outputs or buggy code. It’s about the cost of believing technology that can’t tell the truth about its own capabilities.
🎧 In this episode:
- Why AI tools make false promises about what they can do
- How hallucinations and misinformation undermine trust and learning
- The ethical responsibility of AI companies to stop selling lies
- Why average users are being forced to become “AI experts”
- The role of critical thinking in AI use
- How misplaced faith in AI has real human consequences
- What it will take to build better AI systems
💡 Why It Matters
Most people using AI today don’t realize how easily these systems mislead us. When generative tools confidently fabricate information, present speculation as fact, and promise capabilities they never verify they actually have, they create an illusion of competence that erodes genuine human judgment.
This episode breaks down why AI literacy isn’t optional—it’s a survival skill in a world where technology speaks like an authority but operates without accountability because "AI can make mistakes."
📎 Links & Resources
- Find me on LinkedIn
- aicitationstandard.org
👤 About the Host
Stojanka “Jo” Berry is an AI researcher, educator, and the creator of the Artificial Intelligence Citation Standard (AICS)—a universal attribution framework designed to bring transparency, accountability, and human-first ethics to generative AI use.
This podcast was written, produced, edited, and published by a human, Jo Berry, with the help of AI tools. Please see the AICS-compliant citations for documentation of co-creation processes and tool usage.
🧠 Artificial Intelligence Citation Standard Compliant Citation
Creating a Podcast Episode Description and Outtro for “AI Literacy and Critical Thinking.” ChatGPT, Drafting and Iterating. (23 October 2025). Berry, Stojanka. OpenAI, Business Tier, GPT-5. Augmentor. 05:42 PM UTC. English, United States.
👉 Subscribe to Straight Outta Tokens for real-world stories, sharp analogies, and practical strategies for navigating the breakdowns and breakthroughs of generative AI.
(C) Stojanka Berry LLC, 2025. All rights reserved.
