
Artificial Thought Podcast


By: Elina Halonen

About this title

A behavioural science look at how AI changes the way we decide, act, and make sense of the world.

artificialthought.substack.com
Elina Halonen
Science
  • Ep. 15: How AI tools erode critical thinking through cognitive offloading
    Jul 18 2025

    Most people don’t think about their thinking tools. When AI systems offer explanations, solve problems, or generate decisions, the convenience is obvious, but there’s a cognitive cost: the more we offload reasoning to machines, the less we engage in it ourselves.

    As always, there are two sides to everything - this is an alternative view to last week’s episode on the positive aspects of expanding our minds with Generative AI.

    Key points

    * Frequent use of AI tools is associated with lower critical thinking skills through a mechanism called cognitive offloading - delegating reasoning tasks to external systems.

    * Trust in AI increases the likelihood of offloading and reinforces the habit, while education mitigates the effect - but only when it fosters active cognitive engagement.

    * Over time, offloading reduces cognitive resilience and weakens independent judgement.

    Source: Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies, 15(1), 6. (open access)



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificialthought.substack.com
    12 min
  • Ep. 14: Extending minds with generative AI
    Jul 11 2025

    Much of the public conversation around AI centres on its outputs: what it can generate, how well it performs, what tasks it might take over. Those questions often obscure a more foundational shift: AI systems are becoming embedded in how people think - not just as occasional tools, but as part of the cognitive process itself.

    A recent paper by Andy Clark (Nature Communications, May 2025) situates this shift within a broader cognitive history. Clark is best known for the “extended mind” hypothesis, which argues that human thinking routinely spans across brain, body, and environment. In this article, he applies that lens to generative AI, treating it not as a foreign agent but as a new layer in an already distributed system.

    Key points:

    * Human cognition has always relied on external tools; generative AI continues this pattern of extension.

    * The impact of AI depends on how it is integrated into the thinking process - not just on what it can produce.

    * Clark introduces the idea of “extended cognitive hygiene” - a new skillset for navigating AI-supported reasoning.

    Source: Clark, A. (2025). Extending minds with Generative AI. Nature Communications, 16(1), 1-4. (open access)



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificialthought.substack.com
    13 min
  • Ep. 13: How GenAI roles shape perceptions of value in human-AI collaboration
    Jun 13 2025

    When we talk about AI collaboration, the question is usually whether AI was used or not. This binary misses something crucial about how humans actually experience working with generative systems. The question is not just about whether AI was involved but also when and how it participated in the creative process.

    A recent study suggests that whether AI generates the first draft or provides feedback fundamentally changes how people perceive their creative contribution. When AI starts the work, humans feel like editors rather than creators, even when doing substantial revision. But when humans start and AI refines, the final output feels both higher quality and more authentically their own. The twist? People expect others to devalue AI-enhanced work, creating a tension between internal pride and external credibility. This isn't just about tools - it's about how role assignment shapes creative ownership and the social meaning of AI collaboration.

    Source: Schecter, A., & Richardson, B. (2025, April). How the Role of Generative AI Shapes Perceptions of Value in Human-AI Collaborative Work. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (pp. 1-15). (open access)



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit artificialthought.substack.com
    12 min