
Machine Learning Street Talk (MLST)


By: Machine Learning Street Talk (MLST)

About this title

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, PhD (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Dr. Keith Duggar, who holds a PhD from MIT (https://www.linkedin.com/in/dr-keith-duggar/).
  • Evolution "Doesn't Need" Mutation - Blaise Agüera y Arcas
    Feb 16 2026

    What if life itself is just a really sophisticated computer program that wrote itself into existence?


    In this mind-bending talk, *Blaise Agüera y Arcas* takes us on a journey from random noise to the emergence of life, using nothing but simple code and a whole lot of patience. His artificial life experiment, cheekily named "BFF" (the first two letters stand for "Brainf***"), demonstrates something remarkable: when you let random strings of code interact millions of times, complex self-replicating programs spontaneously emerge from pure chaos.
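
    The mechanics fit on a page. Below is a minimal Python sketch in the spirit of the setup described in the Computational Life paper (arXiv:2406.19108, linked below); this is our reconstruction rather than Blaise's code, and the opcode set, tape length, and step budget are assumptions:

    ```python
    import random

    TAPE_LEN = 64    # bytes per program (an assumed default)
    STEPS = 1024     # instruction budget per interaction (an assumed cutoff)

    def run(tape: bytearray) -> None:
        """Execute a byte tape as a self-modifying Brainfuck variant.

        The tape is simultaneously the program and the data it rewrites:
        one instruction pointer, two data heads (h0, h1), and every
        non-instruction byte is a no-op.
        """
        ip = h0 = h1 = 0
        n = len(tape)
        for _ in range(STEPS):
            if not 0 <= ip < n:
                break
            op = chr(tape[ip])
            if op == '<':   h0 = (h0 - 1) % n
            elif op == '>': h0 = (h0 + 1) % n
            elif op == '{': h1 = (h1 - 1) % n
            elif op == '}': h1 = (h1 + 1) % n
            elif op == '+': tape[h0] = (tape[h0] + 1) % 256
            elif op == '-': tape[h0] = (tape[h0] - 1) % 256
            elif op == '.': tape[h1] = tape[h0]      # copy head0 -> head1
            elif op == ',': tape[h0] = tape[h1]      # copy head1 -> head0
            elif op == '[' and tape[h0] == 0:        # jump past matching ']'
                depth = 1
                while depth and ip + 1 < n:
                    ip += 1
                    depth += {91: 1, 93: -1}.get(tape[ip], 0)
            elif op == ']' and tape[h0] != 0:        # jump back to matching '['
                depth = 1
                while depth and ip > 0:
                    ip -= 1
                    depth += {93: 1, 91: -1}.get(tape[ip], 0)
            ip += 1

    # The "primordial soup": random byte strings. No fitness function anywhere.
    soup = [bytearray(random.randbytes(TAPE_LEN)) for _ in range(2 ** 10)]

    for _ in range(10 ** 6):                  # millions of pairwise interactions
        i, j = random.sample(range(len(soup)), 2)
        pair = soup[i] + soup[j]              # concatenate two tapes...
        run(pair)                             # ...run the joint program...
        soup[i], soup[j] = pair[:TAPE_LEN], pair[TAPE_LEN:]   # ...and split again
    ```

    Note what is absent: there is no fitness function and no random mutation; every change to a tape is made by code running on that tape.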


    *Key Insights from this Talk:*


    *The "Artificial Kidney" Test for Life* — What makes something alive isn't what it's made of, but what it *does*. A rock broken in half gives you two rocks. A kidney broken in half gives you a broken kidney. Function is what separates the living from the non-living.


    *Von Neumann Called It* — Before we even knew what DNA looked like, mathematician John von Neumann figured out exactly what life needed to copy itself: instructions, a constructor to follow them, and a way to copy those instructions. He basically predicted molecular biology from pure logic. (A toy code analogue appears after this list.)


    *The Magic Moment* — Watch as Blaise shows the exact instant when his simulation transitions from random noise to organized, self-replicating code. It's a genuine phase transition, like water freezing into ice, except instead of ice, you get *life*.


    *Evolution Without Mutation* — Here's the twist that challenges everything you learned in biology class: this complexity emerges even when mutation is set to zero. The secret? Symbiogenesis. Things don't just mutate to get better; they *merge*. Two simple replicators that work well together fuse into something more complex. (A compression-based sketch of how to watch this happen appears after this list.)


    *We're All Made of Viruses* — This isn't just simulation theory. In the real world, the mammalian placenta came from an ancient virus. A gene essential for forming memories? Also a virus. Life has been merging and absorbing other life forms all the way down.
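
    On the von Neumann point above: a quine makes the instructions/constructor/copier split concrete in a few lines of Python (our toy analogue, not something from the talk):

    ```python
    # Von Neumann's trio in miniature: the string s is the instruction tape,
    # the format operation below is the constructor that builds from it, and
    # reusing s as its own argument is the copier. Running this program
    # prints its own exact source code.
    s = 's = %r\nprint(s %% s)'
    print(s % s)
    ```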
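
    And on evolution without mutation: one way to watch the phase transition in a soup like the sketch above is to track its compressibility. This is our proxy (the paper uses higher-order entropy rather than zlib), but the logic is the same:

    ```python
    import zlib

    def soup_compressibility(soup) -> float:
        """Compressed size of the whole soup relative to its raw size.

        A random soup is nearly incompressible (ratio near 1.0); once
        self-replicators take over, the soup fills with near-copies of a
        few tapes and the ratio collapses. Plotting this over time makes
        the phase transition visible.
        """
        blob = b"".join(bytes(tape) for tape in soup)
        return len(zlib.compress(blob, level=9)) / len(blob)
    ```

    Because the interaction loop contains no mutation step, any collapse in this ratio is driven entirely by programs copying and fusing with one another.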


    The implications are profound: life isn't just computational; it was computational from the very beginning. And intelligence? That's just what happens when these biological computers start modeling each other.


    Whether you're into artificial life, evolutionary biology, or just want to understand what makes you *you*, this talk will fundamentally change how you think about the boundary between living and non-living matter.


    ---

    TIMESTAMPS:

    00:00:00 Introduction: From Noise to Programs & ALife History

    00:03:15 Defining Life: Function as the "Spirit"

    00:05:45 Von Neumann's Insight: Life is Embodied Computation

    00:09:15 Physics of Computation: Irreversibility & Fallacies

    00:15:00 The BFF Experiment: Spontaneous Generation of Code

    00:23:45 The Mystery: Complexity Growth Without Mutation

    00:27:00 Symbiogenesis: The Engine of Novelty

    00:33:15 Mathematical Proof: Blocking Symbiosis Stops Life

    00:40:15 Evolutionary Implications: It's Symbiogenesis All The Way Down

    00:44:30 Intelligence as Modeling Others

    00:46:49 Q&A: Levels of Abstraction & Definitions


    ---

    REFERENCES:

    Paper:

    [00:01:16] Open Problems in Artificial Life

    https://direct.mit.edu/artl/article/6/4/363/2354/Open-Problems-in-Artificial-Life

    [00:09:30] When does a physical system compute?

    https://arxiv.org/abs/1309.7979

    [00:15:00] Computational Life

    https://arxiv.org/abs/2406.19108

    [00:27:30] On the Origin of Mitosing Cells

    https://pubmed.ncbi.nlm.nih.gov/11541392/

    [00:42:00] The Major Evolutionary Transitions

    https://www.nature.com/articles/374227a0

    [00:44:00] The ARC gene

    https://www.nih.gov/news-events/news-releases/memory-gene-goes-viral

    Person:

    [00:05:45] Alan Turing

    https://plato.stanford.edu/entries/turing/

    [00:07:30] John von Neumann

    https://en.wikipedia.org/wiki/John_von_Neumann

    [00:11:15] Hector Zenil

    https://hectorzenil.net/

    [00:12:00] Robert Sapolsky

    https://profiles.stanford.edu/robert-sapolsky


    ---

    LINKS:

    RESCRIPT: https://app.rescript.info/public/share/ff7gb6HpezOR3DF-gr9-rCoMFzzEgUjLQK6voV5XVWY

    56 min.
  • VAEs Are Energy-Based Models? [Dr. Jeff Beck]
    Jan 25 2026

    What makes something truly *intelligent?* Is a rock an agent? Could a perfect simulation of your brain actually *be* you? In this fascinating conversation, Dr. Jeff Beck takes us on a journey through the philosophical and technical foundations of agency, intelligence, and the future of AI.


    Jeff doesn't hold back on the big questions. He argues that from a purely mathematical perspective, there's no structural difference between an agent and a rock – both execute policies that map inputs to outputs. The real distinction lies in *sophistication* – how complex are the internal computations? Does the system engage in planning and counterfactual reasoning, or is it just a lookup table that happens to give the right answers?
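
    To see why the type signature alone settles nothing, here is a toy sketch (ours, with an invented domain): two policies that are behaviorally identical, one a lookup table, one a planner.

    ```python
    from typing import Callable

    Policy = Callable[[int], int]   # a policy: observations in, actions out

    # A lookup table that happens to emit the "right" action every time.
    table: Policy = {0: 0, 1: 2, 2: 4}.get

    # A planner that reaches the same actions by scoring counterfactual
    # actions against an internal model (here: prefer actions near 2 * obs).
    def planner(obs: int) -> int:
        return max(range(5), key=lambda action: -abs(action - 2 * obs))

    # From the outside, the two are behaviorally indistinguishable:
    assert all(table(o) == planner(o) for o in range(3))
    ```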


    *Key topics explored in this conversation:*


    *The Black Box Problem of Agency* – How can we tell if something is truly planning versus just executing a pre-computed response? Jeff explains why this question is nearly impossible to answer from the outside, and why the best we can do is ask which model gives us the simplest explanation.


    *Energy-Based Models Explained* – A masterclass on how EBMs differ from standard neural networks. The key insight: traditional networks only optimize weights, while energy-based models optimize *both* weights and internal states – a subtle but profound distinction that connects to Bayesian inference. (A numerical sketch of this distinction appears after this list.)


    *Why Your Brain Might Have Evolved from Your Nose* – One of the most surprising moments in the conversation. Jeff proposes that the complex, non-smooth nature of olfactory space may have driven the evolution of our associative cortex and planning abilities.


    *The JEPA Revolution* – A deep dive into Yann LeCun's Joint Embedding Predictive Architecture and why learning in latent space (rather than predicting every pixel) might be the key to more robust AI representations. (A structural sketch appears after this list.)


    *AI Safety Without Skynet Fears* – Jeff takes a refreshingly grounded stance on AI risk. He's less worried about rogue superintelligences and more concerned about humans becoming "reward function selectors" – couch potatoes who just approve or reject AI outputs. His proposed solution? Use inverse reinforcement learning to derive AI goals from observed human behavior, then make *small* perturbations rather than naive commands like "end world hunger."
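
    Picking up the energy-based-model point above: a minimal numerical sketch with a toy quadratic energy (our construction, not anything from the episode), showing that inference in an EBM is itself an optimization over the internal state z, where a feed-forward network would compute z in a single fixed pass:

    ```python
    import numpy as np

    # Toy energy E(x, z; W) = ||x - W z||^2 + ||z||^2.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(8, 4))      # "weights", fixed by the outer training loop
    x = rng.normal(size=8)           # an observation

    z = np.zeros(4)                  # internal state, optimized at test time
    lr = 0.5 / (np.linalg.norm(W, 2) ** 2 + 1)   # safe step size for this energy
    for _ in range(500):
        grad = 2 * (W.T @ (W @ z - x) + z)       # dE/dz
        z -= lr * grad

    # This quadratic energy has a closed-form minimum; confirm we found it.
    z_star = np.linalg.solve(W.T @ W + np.eye(4), W.T @ x)
    assert np.allclose(z, z_star, atol=1e-5)
    ```

    This is also one reading of the episode title's VAE connection: a VAE's encoder can be seen as amortizing this inner loop, learning to approximate the minimizing z in one pass instead of iterating per input.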
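
    And the JEPA idea in structural form (again a toy sketch of ours: one-layer encoders, made-up sizes, and an assumed EMA coefficient stand in for the real architecture):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def encode(x, W):
        return np.tanh(W @ x)                 # toy one-layer encoder

    D, K = 64, 8                              # "pixel" dim and latent dim (made up)
    W_ctx = 0.1 * rng.normal(size=(K, D))     # context encoder (trained)
    W_tgt = W_ctx.copy()                      # target encoder (EMA copy, no gradient)
    P = 0.1 * rng.normal(size=(K, K))         # predictor, operates in latent space

    x_context = rng.normal(size=D)            # e.g. visible patches of an image
    x_target = rng.normal(size=D)             # e.g. the masked-out patches

    # The JEPA loss lives in embedding space: predict the target's latent,
    # not its pixels, so unpredictable detail need not be modeled.
    s_ctx = encode(x_context, W_ctx)
    s_tgt = encode(x_target, W_tgt)           # stop-gradient: treated as a constant
    loss = float(np.sum((P @ s_ctx - s_tgt) ** 2))

    # A pixel-space autoencoder would instead pay for every stray pixel:
    # loss_pixels = np.sum((decode(P @ s_ctx) - x_target) ** 2)

    # After each optimizer step, the target encoder slowly tracks the
    # context encoder (EMA; the 0.996 coefficient is an assumption):
    W_tgt = 0.996 * W_tgt + 0.004 * W_ctx
    ```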


    Whether you're interested in the philosophy of mind, the technical details of modern machine learning, or just want to understand what makes intelligence *tick,* this conversation delivers insights you won't find anywhere else.


    ---

    TIMESTAMPS:

    00:00:00 Geometric Deep Learning & Physical Symmetries

    00:00:56 Defining Agency: From Rocks to Planning

    00:05:25 The Black Box Problem & Counterfactuals

    00:08:45 Simulated Agency vs. Physical Reality

    00:12:55 Energy-Based Models & Test-Time Training

    00:17:30 Bayesian Inference & Free Energy

    00:20:07 JEPA, Latent Space, & Non-Contrastive Learning

    00:27:07 Evolution of Intelligence & Modular Brains

    00:34:00 Scientific Discovery & Automated Experimentation

    00:38:04 AI Safety, Enfeeblement & The Future of Work


    ---

    REFERENCES:

    Concept:

    [00:00:58] Free Energy Principle (FEP)

    https://en.wikipedia.org/wiki/Free_energy_principle

    [00:06:00] Monte Carlo Tree Search

    https://en.wikipedia.org/wiki/Monte_Carlo_tree_search

    Book:

    [00:09:00] The Intentional Stance

    https://mitpress.mit.edu/9780262540537/the-intentional-stance/

    Paper:

    [00:13:00] A Tutorial on Energy-Based Learning (LeCun 2006)

    http://yann.lecun.com/exdb/publis/pdf/lecun-06.pdf

    [00:15:00] Auto-Encoding Variational Bayes (VAE)

    https://arxiv.org/abs/1312.6114

    [00:20:15] JEPA (Joint Embedding Predictive Architecture)

    https://openreview.net/forum?id=BZ5a1r-kVsf

    [00:22:30] The Wake-Sleep Algorithm

    https://www.cs.toronto.edu/~hinton/absps/ws.pdf


    ---

    RESCRIPT:

    https://app.rescript.info/public/share/DJlSbJ_Qx080q315tWaqMWn3PixCQsOcM4Kf1IW9_Eo

    PDF:

    https://app.rescript.info/api/public/sessions/0efec296b9b6e905/pdf

    47 min.
  • Abstraction & Idealization: AI's Plato Problem [Mazviita Chirimuuta]
    Jan 23 2026

    Professor Mazviita Chirimuuta joins us for a fascinating deep dive into the philosophy of neuroscience and what it really means to understand the mind.


    *What can neuroscience actually tell us about how the mind works?* In this thought-provoking conversation, we explore the hidden assumptions behind computational theories of the brain, the limits of scientific abstraction, and why the question of machine consciousness might be more complicated than AI researchers assume.


    Mazviita, author of *The Brain Abstracted,* brings a unique perspective shaped by her background in both neuroscience research and philosophy. She challenges us to think critically about the metaphors we use to understand cognition — from the reflex theory of the late 19th century to today's dominant view of the brain as a computer.


    *Key topics explored:*


    *The problem of oversimplification* — Why scientific models necessarily leave things out, and how this can sometimes lead entire fields astray. The cautionary tale of reflex theory shows how elegant explanations can blind us to biological complexity.


    *Is the brain really a computer?* — Mazviita unpacks the philosophical assumptions behind computational neuroscience and asks: if we can model anything computationally, what makes brains special? The answer might challenge everything you thought you knew about AI.


    *Haptic realism* — A fresh way of thinking about scientific knowledge that emphasizes interaction over passive observation. Knowledge isn't about reading the "source code of the universe" — it's something we actively construct through engagement with the world.


    *Why embodiment matters for understanding* — Can a disembodied language model truly understand? Mazviita makes a compelling case that human cognition is deeply entangled with our sensory-motor engagement and biological existence in ways that can't simply be abstracted away.


    *Technology and human finitude* — Drawing on Heidegger, we discuss how the dream of transcending our physical limitations through technology might reflect a fundamental misunderstanding of what it means to be a knower.


    This conversation is essential viewing for anyone interested in AI, consciousness, philosophy of mind, or the future of cognitive science. Whether you're skeptical of strong AI claims or a true believer in machine consciousness, Mazviita's careful philosophical analysis will give you new tools for thinking through these profound questions.


    ---

    TIMESTAMPS:

    00:00:00 The Problem of Generalizing Neuroscience

    00:02:51 Abstraction vs. Idealization: The "Kaleidoscope"

    00:05:39 Platonism in AI: Discovering or Inventing Patterns?

    00:09:42 When Simplification Fails: The Reflex Theory

    00:12:23 Behaviorism and the "Black Box" Trap

    00:14:20 Haptic Realism: Knowledge Through Interaction

    00:20:23 Is Nature Protean? The Myth of Converging Truth

    00:23:23 The Computational Theory of Mind: A Useful Fiction?

    00:27:25 Biological Constraints: Why Brains Aren't Just Neural Nets

    00:31:01 Agency, Distal Causes, and Dennett's Stances

    00:37:13 Searle's Challenge: Causal Powers and Understanding

    00:41:58 Heidegger's Warning & The Experiment on Children


    ---

    REFERENCES:

    Book:

    [00:01:28] The Brain Abstracted

    https://mitpress.mit.edu/9780262548045/the-brain-abstracted/

    [00:11:05] The Integrative Action of the Nervous System

    https://www.amazon.sg/integrative-action-nervous-system/dp/9354179029

    [00:18:15] The Quest for Certainty (Dewey)

    https://www.amazon.com/Quest-Certainty-Relation-Knowledge-Lectures/dp/0399501916

    [00:19:45] Realism for Realistic People (Chang)

    https://www.cambridge.org/core/books/realism-for-realistic-people/ACC93A7F03B15AA4D6F3A466E3FC5AB7


    ---

    RESCRIPT:

    https://app.rescript.info/public/share/A6cZ1TY35p8ORMmYCWNBI0no9ChU3-Kx7dPXGJURvZ0

    PDF Transcript:

    https://app.rescript.info/api/public/sessions/0fb7767e066cf712/pdf

    54 min.
No reviews yet