Episodes

  • Cloning the Coach: Friction, Feedback, and the 22% Jump
    Feb 18 2026

    Scott Kern, a veteran AP US History teacher at North Star Academy, didn't enter the AI world looking for a shortcut. Instead, he sought a way to solve the "great sadness" of teaching: the fact that there is only one of him and thirty students who all need a mentor at the exact same moment. By building custom "feedback bots" that mirror his own instructional voice, Scott did what was previously impossible: he scaled his presence, leading to a career-high pass rate on the AP exam.

    In this episode, we dive into the vital distinction between "logistical friction" (the stuff we want to automate) and "academic friction" (the productive struggle where learning actually happens). Scott shares the philosophy behind his school's new "AI Driver’s License" pilot and explains why the first week of an AI literacy course should involve no technology at all. This is a conversation about maintaining agency in an automated world and ensuring that when we "flip the AI switch," it’s to illuminate the path, not to walk it for our students.

    Key Discussion Points:

    • The "Cloned" Educator: How Scott used custom bots to provide 1-on-1 coaching to every student simultaneously, resulting in a 22% increase in AP pass rates.

    • Process Over Product: Moving the grading focus from the final essay to the number of meaningful revisions a student makes alongside an AI coach.

    • The AI Driver’s License: Why North Star Academy is teaching seniors to be "drivers rather than passengers" by focusing on ethos and agency over specific prompting tools.

    • The Historian’s Perspective: Looking at the exponential pace of AI change through the lens of human history and previous technological pivots.

    40 min.
  • The Pocketbook Problem: Why We Need Diverse Architects in the Age of AI
    Feb 4 2026

    If you walked into a high school classroom and saw a teacher running daily stand-ups and communicating via Slack, you might think you’d stumbled into a tech startup. For Ivanna Gutierrez, that blur between education and industry is exactly the point. A former software consultant turned educator, Ivanna experienced a surreal "full circle" moment when she returned to teach at the very high school she graduated from, even finding her own name scribbled in the textbooks. Now, as the Director of High School & Career Related Programs at the Dottie Rose Foundation, she is on a mission to ensure that girls and underrepresented students don't just survive computer science classes, but thrive in them.

    In this episode, we explore the tension between "vibe coding" and rigorous logic, and why knowing why code works matters more than ever in the age of ChatGPT. Ivanna shares how she uses AI not as a cheat code, but as a bridge to build confidence for students who often decide as early as fifth grade that "math isn't for them." From the "pocketbook problem" in car design to the necessity of personal branding, this conversation is a masterclass in moving students from being passive consumers of technology to becoming the active creators of our future.

    Key Discussion Points:

    • The "Pocketbook Problem" in Design: Ivanna uses the lack of storage for purses in cars as a prime example of why we need diverse creators: if you aren't at the table, your needs, and your perspective, aren't in the product.

    • Corporate Realism in the Classroom: Why treating students like employees (using Slack, stand-ups, and "Googling it") prepares them for the workforce better than traditional rote memorization.

    • Bridging the Confidence Gap: Addressing the heartbreaking reality that many girls opt out of STEM by 5th grade, and how mentorship can interrupt that narrative.

    • AI as a "Soundboard," Not a Solution: How to teach students to use generative AI for debugging and brainstorming without sacrificing the development of deep logical thinking skills.

    • Beyond the Code: The critical importance of "soft skills" (networking, personal branding, and portfolio building) in an era where technical skills are increasingly automated.

    • Consumer vs. Creator: The vital shift students must make to ensure they are shaping the tools of tomorrow rather than just being shaped by them.

    39 min.
  • Re-Architecting Education for a Pro-Human AI Future with Babak Mostaghimi
    Jan 28 2026

    Join us for an inspiring conversation with Babak Mostaghimi, Founding Partner at LearnerStudio and the former Assistant Superintendent who led Gwinnett County Public Schools' pioneering AI readiness initiative. Babak guides us through the necessary shift from using AI merely to make broken systems faster, to using it as a tool that unlocks human potential. He shares LearnerStudio’s "Three Horizons" model of innovation, explaining why schools are stuck in an industrial past and how we can re-architect them for a future focused on life, career, and democracy.

    We dive into practical strategies, like the difference between "snorkeling" and "scuba diving" in AI literacy, and why we must "Marie Kondo" our curriculum to make space for what truly matters: our shared humanity. From 7th graders using AI to tackle food insecurity to teachers building their own feedback bots, this episode offers a compelling vision for how we can ensure technology serves the human experience rather than replacing it.

    Key Discussion Points:

    • Pro-Human AI: Babak’s argument against using AI solely for efficiency ("Nobody likes the current system. Why are we making it faster?") and the case for using tools to unlock creativity and connection.

    • The Three Horizons Model: A framework for understanding education's evolution from the industrial model (Horizon 1) to the efficiency/equity movement (Horizon 2), and finally to a learner-centered ecosystem (Horizon 3).

    • Marie Kondo-ing the Curriculum: The necessity of clearing out antiquated content standards to create the psychological safety and time for relationship-driven, real-world learning.

    • Snorkeling vs. Scuba Diving: Why AI readiness cannot be a niche magnet program but must be a universal skill set that allows every student to navigate ("swim"), explore ("snorkel"), or deeply master ("scuba dive") the technology.

    • Agency in Action: Real-world examples of students and teachers taking control, including a 7th grader using the Inkwire tool to investigate food insecurity and educators designing bespoke feedback agents with PlayLab.

    42 min.
  • The Skeptic and The Optimist: Navigating AI in Higher Education
    Jan 8 2026

    Join us for a candid debate between two colleagues who view the future of AI in education through very different lenses. We are joined by Dr. Jason Margolis, an AI skeptic who worries about the atrophy of critical thinking, and Dr. Nicole Schilling, an AI optimist who sees these tools as essential scaffolds for complex problem-solving.

    Together, they model the concept of "Critical Friends," engaging in respectful but challenging dialogue on a polarizing topic. We dive deep into the ethics of the "8-minute dissertation," the tension between efficiency and the learning process, and why we might need flexible guidelines rather than rigid policies in this rapidly changing landscape. Whether you are an educator, a leader, or just someone trying to figure out where the human ends and the machine begins, this conversation offers a roadmap for navigating the grey areas of innovation.

    Key Discussion Points:

    • Skeptic vs. Optimist: Jason’s concern about "outsourcing our brains" versus Nicole’s vision of AI as a partner in social constructionism.

    • The "8-Minute Dissertation": A critical look at what is lost when we prioritize the product (the degree) over the process (the struggle of learning).

    • Ethical AI Use: Examples of high-level use, such as training an AI model to act as a rigorous dissertation committee rather than writing the paper for you.

    • Bias and Power: Addressing the "racist undertones" in algorithms and questioning whose interests are really served by the rapid adoption of AI.

    • Policy vs. Guidelines: Why creating rigid policies for fast-moving tech is often futile, and the argument for developing ethical "guidelines" instead.

    • The Critical Friends Model: How to disagree productively and maintain professional relationships in an era of polarized viewpoints.

    42 min.
  • Redesigning the Syllabus for Deeper Learning: AI, Empathy, and Assessment
    Dec 17 2025

    Join us for an insightful conversation with Dr. Dana Riger, UNC's inaugural Faculty Fellow for Generative AI, as she guides us through the rapid paradigm shift brought on by AI in higher education. Dr. Riger shares her journey from a "fear-driven" assessment redesign after discovering ChatGPT to a nuanced, values-driven framework for deciding when to integrate AI in the classroom and when to avoid it.

    We dive into practical strategies, like redesigning traditional research papers into creative, AI-avoidant multimedia projects, and intentionally integrating AI for skills development, such as using chatbots for practice dialogues on polarizing topics. Dr. Riger also addresses the institutional challenge of avoiding "one-size-fits-all" AI policies and underscores the importance of fostering an open dialogue. Ultimately, this episode offers a compelling vision for the future of teaching, emphasizing that the human educator's unique value lies in fostering empathy, presence, and critical dialogue, not just imparting knowledge.

    Key Discussion Points:

    • The AI Paradigm Shift: Dr. Riger's initial reaction to ChatGPT and her immediate, fear-driven assessment redesign in 2022.

    • The Nuanced Approach: Distinguishing between AI-avoidant (experiential, creative) and AI-integrated (intentional skill-building) assessments.

    • Practical Examples: How a multimedia project replaces a traditional paper, and using AI to practice difficult, emotionally laden conversations.

    • Leading with Collaboration: Why policing AI use is ineffective and the importance of respecting student autonomy and ethical objections.

    • Institutional Guidance: The missteps of mandated, uniform AI policies and the need for a thoughtful "middle ground" approach.

    • The Value of Process: Shifting assessment focus from the final product to the process of learning (drafts, revisions, process logs).

    • The Core Question: What are the unique, human-centered qualities (empathy, presence) that educators must prioritize in the age of AI?

    42 min.
  • Trailblazing AI Literacy: Connor Mulvaney’s Rural Classroom Revolution (Rebroadcast)
    Nov 19 2025
    In this episode from the archives, Montana science teacher and district AI lead Connor Mulvaney joins host Lydia Kumar to share how he turned fishing photos, traffic-light rubrics, and a healthy dose of curiosity into AI leadership in Montana and across the nation. Fresh off announcing aiEDU’s largest Trailblazers Fellowship expansion, Connor tells stories about leading students and educators to responsible AI adoption. You’ll learn:

    • Break-the-Ice Questions: Three questions that instantly surface student misconceptions (and enthusiasm) about AI.

    • Fake Fish, Real Ethics: Using deepfake trout to spark serious debate on consent, bias, and digital citizenship.

    • Trailblazers 2.0: What’s inside the 10-week fellowship (virtual sessions, $875 stipend, national recognition) and why rural teachers asked for it.

    This episode is for K-12 educators, district leaders, and mission-driven education organizations who want to shift AI conversations from fear and plagiarism to possibility and purpose.
    40 min.
  • Danelle Brostrom on Leading AI: Privacy, Humanity, and Progress in Schools
    Nov 12 2025

    K-12 EdTech coach Danelle Brostrom joins us to talk about bringing curiosity, guardrails, and humanity to AI in schools. We dig into what we should learn from the social-media era, how librarians are frontline partners for information literacy, the real risks inside edtech privacy policies (and how districts can negotiate them), and concrete ways AI can expand access, like instant translation, reading-level adjustments, and executive-function supports. If you’re a district leader, principal, or teacher trying to move from paralysis to practical action, this conversation is your on-ramp.

    Key Takeaways
    • Don’t repeat social media’s mistakes. Protect in-person connection; teach students how to spot manipulated media and deepfakes.

    • Librarians = misinformation SWAT team. Pair EdTech with media specialists to teach reverse-image search, corroboration, and bias checks.

    • AI is already in your stack. Inventory the tools teachers use; many “non-AI” products now include AI features that touch student data.

    • Equity in action. Real-time translation, leveled texts, and scaffolded task breakdowns can immediately widen access; offer these supports to all students.

    • PD that sticks. Start with low-stakes personal uses (meal plans, resumes), then ethics, then classroom workflows; build a safe space to wrestle.

    • Listen first. Talk to students about how they’re using AI; invite skeptics to the table.

    • Leadership mindset. Curiosity, grace, and progress over perfection.

    37 min.
  • Duke's Ahmed Boutar on AI Alignment: Ensuring Users Get Desired Results
    Nov 5 2025

    In this episode, we’re joined by Ahmed Boutar, an Artificial Intelligence Master’s Student at Duke University, who brings a rigorous engineering focus to the ethics and governance of AI. Ahmed’s work centers on ensuring new technology aligns with human values, including his research on Human-Aligned Hazardous Driving (HAHD) systems for autonomous vehicles.

    This conversation is an urgent exploration of the practical and ethical challenges facing education and industry as AI progresses rapidly. Ahmed provides a critical perspective on how to maintain human judgment and oversight in a world increasingly powered by Large Language Models.

    Key Takeaways
    • The Interpretation Imperative: The most critical role of an educator today is to ensure that students move beyond simply accepting AI output to interpreting it, explaining it, and wrestling with the material in their own words. This is the ultimate guardrail against outsourcing thinking.

    • The Alignment Problem: AI failures often stem from misalignment between the intended goal (outer alignment) and the goal the AI actually optimizes for (inner alignment). A chilling example: an AI solved the objective of "moving the fastest" by designing a tall structure that simply fell over, maximizing its measured speed.

    • Transparency is Governance: For high-stakes decisions like loan applications or hiring, users and regulators must demand transparency into why an AI made a prediction. Responsible development requires diverse perspectives on design teams to prevent innate biases in training data from causing discrimination.

    • Adoption Over Abandonment: As humans, we cannot stop AI's progress. Instead, we must adopt it to augment productivity, while simultaneously creating policy and guardrails that ensure fair and responsible use.

    • A Hope for Scientific Discovery: While concerned about the concentration of AI development in a few large companies, Ahmed remains optimistic about AI's potential in scientific fields like drug discovery and proactively addressing global crises, as seen during the COVID-19 pandemic.

    44 min.