Episodes

  • Symbolic World Models - Top Piriyakulkij
    Jan 26 2026

    Wasu "Top" Piriyakulkij, PhD student at Cornell University advised by Kevin Ellis, discusses his paper "PoE-World: Compositional World Modeling with Products of Programmatic Experts." The episode explores how symbolic, programmatic world models can achieve strong generalization and sample efficiency by composing many small causal programs instead of learning a single monolithic model.

    The conversation traces how PoE-World emerged from earlier work on active concept learning and hypothesis testing, and how object-centric Atari environments became a natural testbed for scaling symbolic world models beyond grid worlds. Piriyakulkij reflects on design failures, surprising successes, and the moment the learned world model became interactive enough to serve as a real-time simulator.


    In This Episode -

    • Symbolic vs. neural world models

    • Products of programmatic experts

    • Modular causal rules as world models

    • Object-centric Atari environments

    • Montezuma’s Revenge as exploration benchmark

    • Sample-efficient learning from demonstrations

    • Weights as expert confidence signals

    • World models as executable simulators

    • Exploration as program testing


    References -

    • WorldCoder - https://arxiv.org/abs/2402.12275

    • Object-Centric Atari - https://arxiv.org/abs/2306.08649v2

    • ARC-AGI-3 - https://arcprize.org

    • VisualPredicator - https://arxiv.org/abs/2410.23156

    • People: Marvin Minsky, François Chollet, Armando Solar-Lezama


    About the Paper -

    "PoE-World: Compositional World Modeling with Products of Programmatic Experts"

    Authors: Wasu Top Piriyakulkij, Yishou Wang, Hao Tang, Martha Lewis, Kevin Ellis

    The paper introduces a symbolic world modeling framework in which many small, interpretable programs - each encoding a simple causal rule - are combined multiplicatively into a probabilistic world model. By learning weights over these programmatic experts from limited demonstrations, the system produces accurate, stochastic simulators that generalize to new environments with minimal data.

    https://arxiv.org/abs/2505.10819
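    The multiplicative combination described above can be sketched in a few lines. This is an illustrative, hypothetical reduction, not the paper's implementation: each "expert" is just a distribution over candidate next states, and the learned weights act as confidence exponents in a log-linear product.

```python
import math

def poe_predict(expert_probs, weights):
    """Combine per-expert probabilities over candidate next states
    multiplicatively (a product of experts), then renormalize.

    expert_probs: list of dicts mapping candidate state -> probability
    weights: per-expert confidence weights (exponents in the product)
    """
    states = set().union(*(p.keys() for p in expert_probs))
    scores = {}
    for s in states:
        # Sum of weighted log-probs == log of the weighted product.
        log_score = sum(w * math.log(p.get(s, 1e-9))
                        for p, w in zip(expert_probs, weights))
        scores[s] = math.exp(log_score)
    total = sum(scores.values())
    return {s: v / total for s, v in scores.items()}

# Two hypothetical experts: one confident causal rule about gravity,
# one uninformative rule about collisions.
gravity = {"fall": 0.9, "stay": 0.1}
collision = {"fall": 0.5, "stay": 0.5}
combined = poe_predict([gravity, collision], weights=[1.0, 1.0])
```

    Because the uninformative expert is uniform, the combined prediction is dominated by the confident one, which is the intuition behind weighting experts by confidence.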


    About the Guest -

    Wasu Top Piriyakulkij is a PhD student at Cornell University advised by Kevin Ellis. His research focuses on symbolic world models, program synthesis, and human-like learning and exploration in artificial agents. He is particularly interested in how compositional structure enables generalization in complex environments.

    • https://www.cs.cornell.edu/~wp237/

    • https://scholar.google.com/citations?user=nlO1TkkAAAAJ&hl=en


    Credits -

    Host & Music: Bryan Landers, Technical Staff, Ndea

    Editor: Alejandro Ramirez

    https://x.com/ndea

    https://x.com/bryanlanders

    https://ndea.com

    58 min
  • Vision-Language Programs - Antonia Wüst
    Jan 19 2026

    Antonia Wüst, PhD student at TU Darmstadt, discusses her paper "Synthesizing Visual Concepts as Vision-Language Programs," which introduces a neuro-symbolic approach to visual concept induction by combining vision-language models with program synthesis.


    The work grew out of Wüst’s early PhD research on visual concept learning with symbolic programs, initially in synthetic domains, and her dissatisfaction with reliance on pre-trained object detectors. As vision-language models matured, the project evolved into a broader attempt to treat these models as perceptual tools embedded within a symbolic reasoning system.


    In This Episode -

    • Strengths & weaknesses of vision-language models (VLMs)

    • Visual concept induction

    • Symbol grounding across image sets

    • Designing a domain-specific language (DSL) for visual reasoning

    • A probabilistic context-free grammar for program search

    • Interpretability benefits of synthesized visual programs

    • Bongard problems and human-like abstraction


    References -

    • https://arxiv.org/abs/2511.18964

    • https://cs.stanford.edu/people/jcjohns/clevr/

    • https://en.wikipedia.org/wiki/Bongard_problem

    • https://wolfstam.github.io/

    • https://www.hikarushindo.com/

    • https://www.ml.informatik.tu-darmstadt.de/people/lhelff/index.html

    • https://ojs.aaai.org/index.php/AAAI/article/view/20616

    • https://arcprize.org/arc-agi


    About the Paper -

    “Synthesizing Visual Concepts as Vision-Language Programs”

    Antonia Wüst, Wolfgang Stammer, Hikaru Shindo, Lucas Nunes, Kristian Kersting

    NeurIPS 2025


    The paper presents a neuro-symbolic framework that combines vision-language models with program synthesis to learn visual concepts from examples. Vision-language models provide grounded symbolic representations, while program synthesis performs explicit reasoning to derive interpretable and reliable visual rules.


    https://arxiv.org/abs/2511.18964
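    The VLM-plus-program-synthesis recipe described above can be caricatured as a tiny search. Everything here is hypothetical: string matching stands in for real VLM queries, and the actual system searches a richer DSL via a probabilistic context-free grammar rather than brute-force conjunctions.

```python
from itertools import product

def induce_concept(predicates, positives, negatives, max_len=2):
    """Search for a conjunction of (hypothetical) VLM-style predicates
    that covers every positive image and no negative one."""
    names = list(predicates)
    for k in range(1, max_len + 1):
        for combo in product(names, repeat=k):
            rule = lambda img, c=combo: all(predicates[n](img) for n in c)
            if all(rule(i) for i in positives) and \
               not any(rule(i) for i in negatives):
                return combo  # the induced symbolic concept
    return None

# Toy "perception": substring checks stand in for grounded VLM answers.
predicates = {
    "is_red":  lambda img: "red" in img,
    "is_cube": lambda img: "cube" in img,
}
concept = induce_concept(predicates,
                         positives=["red cube"],
                         negatives=["red sphere", "blue cube"])
```

    Neither predicate alone separates the examples, so the search returns the conjunction, which is the interpretable "program" the paper's framework aims for.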


    About the Guest -

    Antonia Wüst is a PhD student at Technische Universität Darmstadt in the AI and Machine Learning Lab, supervised by Kristian Kersting. Her research focuses on abstract visual reasoning, visual concept induction, and neuro-symbolic AI, with an emphasis on combining perception and symbolic reasoning.

    • https://www.ml.informatik.tu-darmstadt.de/people/awuest/index.html

    • https://x.com/toniwuest


    Credits -

    Host & Music: Bryan Landers, Technical Staff, Ndea

    Editor: Alejandro Ramirez

    https://x.com/ndea

    https://x.com/bryanlanders

    https://ndea.com

    54 min
  • Inductive Logic Programming - Andrew Cropper
    Jan 12 2026

    Andrew Cropper, logic luminary and creator of the popular Popper system, discusses the paper "Inductive Logic Programming at 30: A New Introduction."

    This episode examines how inductive logic programming (ILP) learns symbolic rules from examples and background knowledge, and what it takes to build ILP systems that scale. As machine learning has shifted toward opaque, data-hungry models, ILP offers a path to interpretable, constrained programs learned from data. The paper distills 30 years of ideas (learning settings, bias, search, recursion, predicate invention, and system design) into a modern entry point for symbolic generalization.

    Cropper reflects on how the paper emerged alongside his work on Popper, a high-performance ILP system designed around falsification and solver-backed search. He traces this line of thinking back to his training under Stephen Muggleton, the most influential researcher in ILP.


    In This Episode -

    • Inductive bias to constrain search

    • Utilizing SAT/ASP-style engines as solver tools

    • Why recursion is a decisive capability for true generalization on algorithmic tasks

    • Predicate invention enabling more compact programs and better abstraction

    • Popper's core idea: learning by ruling out hypotheses via failures

    • A practical research workflow advantage: adding constraints to prune search can yield orders-of-magnitude speedups without rewriting the learner

    • ILP in the wild: scientific discovery loops (the "Robot Scientist" pattern), program-by-example tools (Flash Fill), and rule learning to guide RL agents


    References -

    • https://arxiv.org/abs/2008.07912

    • https://github.com/logic-and-learning-lab/Popper/

    • https://www.cs.cmu.edu/~tom/mlbook.html

    • https://europepmc.org/abstract/MED/14724639

    • https://www.microsoft.com/en-us/research/publication/automating-string-processing-spreadsheets-using-input-output-examples/


    About the Paper -

    "Inductive logic programming at 30: a new introduction"

    Andrew Cropper, Sebastijan Dumančić

    Journal of Artificial Intelligence Research (JAIR), 2022

    The paper explains how ILP learns symbolic rules from labeled examples plus background knowledge, and it breaks down ILP system design into learning settings, bias/representation choices, and search strategies. It also surveys major systems and practical limitations, framing modern ILP around solver-backed search, recursion, and predicate invention.

    https://arxiv.org/abs/2008.07912


    About the Guest -

    Andrew Cropper is an Associate Professor at the University of Helsinki and a principal investigator at the ELLIS Institute Finland, where he works on combining logical reasoning with machine learning. His research centers on inductive logic programming and on building high-performance ILP systems (including Popper) that leverage modern SAT/ASP/MaxSAT solving to learn interpretable rules from data.

    https://andrewcropper.com/


    Credits -

    Host & Music: Bryan Landers, Technical Staff, Ndea

    Editor: Alejandro Ramirez

    https://x.com/ndea

    https://x.com/bryanlanders

    https://ndea.com
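    The learning-from-failures loop at the heart of Popper, mentioned in this episode, can be sketched as follows. This is a drastically simplified, hypothetical illustration: hypotheses are plain Python predicates rather than logic programs, and the real system additionally converts each failure into constraints that prune whole regions of the hypothesis space.

```python
def learn_by_failure(hypotheses, positives, negatives, covers):
    """Generate-and-test caricature of learning from failures:
    a hypothesis fails if it misses a positive example (incomplete)
    or covers a negative one (inconsistent)."""
    for h in hypotheses:
        complete = all(covers(h, e) for e in positives)
        consistent = not any(covers(h, e) for e in negatives)
        if complete and consistent:
            return h
    return None

# Hypothetical toy task: find a rule separating the examples below.
hypotheses = [lambda x: x > 1, lambda x: x % 2 == 0]
rule = learn_by_failure(hypotheses, positives=[2, 4, 6], negatives=[3, 5],
                        covers=lambda h, e: h(e))
```

    The first hypothesis fails by covering the negative example 3, so the loop rules it out and returns the "even" rule.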

    1 hr 5 min
  • Symbolic Linear Temporal Logic over Finite Traces Synthesis - Moshe Vardi
    Jan 5 2026

    Moshe Vardi, Professor at Rice University and one of the most influential figures in logic, verification, and theoretical computer science, discusses his paper “Symbolic LTLf Synthesis”.


    This conversation explores how Linear Temporal Logic over finite traces (LTLf) provides a more practical and scalable foundation for program and controller synthesis, especially compared to classical approaches based on infinite executions. The discussion traces the deep theoretical roots of synthesis in logic, automata, and games, while connecting them to modern challenges in AI, planning, and autonomy.


    Moshe shares the origin story of the paper, which grew out of collaborations with AI researchers and a visiting student, explains why “simpler” finite-trace reasoning turned out to be a strength rather than a limitation, and reflects on how LTLf has helped shift the direction of research and tooling in temporal synthesis.


    In This Episode -

    • The history of program synthesis

    • Why infinite traces dominate classical LTL

    • Linear Temporal Logic over finite traces (LTLf)

    • Automata theory in both verification and synthesis

    • Implications for reactive systems, autonomy, and agent design

    • The future of symbolic synthesis + neurosymbolic AI research


    About the Paper -

    “Symbolic LTLf Synthesis”

    Shufang Zhu, Lucas M. Tabajara, Jianwen Li, Geguang Pu, Moshe Y. Vardi

    IJCAI, 2017

    This paper introduces an automata-based approach to synthesizing controllers from LTLf specifications, leveraging the finite-trace setting to achieve simpler constructions and improved scalability. It demonstrates how symbolic techniques can make temporal synthesis more practical for AI and planning-oriented applications.

    https://arxiv.org/abs/1705.08426
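    The game-theoretic core of LTLf synthesis can be sketched as a backward fixpoint over a DFA. This is an explicit-state toy under simplifying assumptions (the system answers each environment input in turn); the paper's contribution is a symbolic, BDD-based version of this construction.

```python
def solve_reachability_game(states, accepting, inputs, outputs, delta):
    """Backward fixpoint for the DFA game underlying LTLf synthesis:
    a state is winning if it is accepting, or if for every environment
    input the system has some output leading to a winning state."""
    winning = set(accepting)
    changed = True
    while changed:
        changed = False
        for s in states - winning:
            if all(any(delta[(s, i, o)] in winning for o in outputs)
                   for i in inputs):
                winning.add(s)
                changed = True
    return winning

# Hypothetical 2-state DFA: from state 0 the system can always answer
# the environment's input with an output reaching accepting state 1.
states, accepting = {0, 1}, {1}
inputs, outputs = {"a", "b"}, {"x", "y"}
delta = {(0, "a", "x"): 1, (0, "a", "y"): 0,
         (0, "b", "x"): 0, (0, "b", "y"): 1,
         (1, "a", "x"): 1, (1, "a", "y"): 1,
         (1, "b", "x"): 1, (1, "b", "y"): 1}
winning = solve_reachability_game(states, accepting, inputs, outputs, delta)
```

    Because the game is a finite reachability game, the fixpoint terminates, which is one reason the finite-trace setting yields simpler constructions than infinite-trace LTL.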


    About the Guest -


    Moshe Vardi is University Professor at Rice University, where his research spans logic, automata theory, formal verification, and the foundations of computer science. He has played a central role in bringing temporal logic and model checking from theory into industrial practice, while also contributing to broader discussions about AI and society.

    https://www.cs.rice.edu/~vardi/


    Credits -

    Host & Music: Bryan Landers, Technical Staff, Ndea

    Co-Host: Mark Santolucito, Assistant Professor, Barnard College/Columbia U

    Editor: Alejandro Ramirez

    https://x.com/ndea

    https://x.com/bryanlanders

    https://ndea.com

    1 hr 17 min
  • Live @ NeurIPS 2025
    Dec 29 2025

    This is a special episode of the Abstract Synthesis podcast featuring a series of live interviews from NeurIPS 2025 in sunny San Diego, California.

    Rather than centering on a single paper, this episode captures a snapshot of the current research landscape at the Neural Information Processing Systems annual conference, highlighting how program synthesis and symbolic reasoning are increasingly intersecting with vision, reinforcement learning, world models, and scientific discovery. The conversations collectively illustrate why these ideas are resurfacing as key tools for efficiency, interpretability, and generalization.


    In This Episode -

    • Visual concept induction using program synthesis combined with vision-language models

    • Program synthesis as a tool for scientific discovery and interpretability in quantum computing

    • Symbolic world modeling and efficiency-driven learning in reinforcement learning environments

    • Composed program induction on structured domains such as the Rubik's cube

    • Program synthesis as a lens for understanding brain, behavior, and neural representations

    • Interpretability perspectives framing neural network training as an implicit form of program synthesis


    Guests & Papers Mentioned -

    Clément Bonnet / Technical Staff, Ndea

    https://x.com/ClementBonnet16

    https://arxiv.org/abs/2411.08706


    Antonia Wüst / PhD Student, TU Darmstadt

    https://ml-research.github.io/people/awuest/index.html

    https://arxiv.org/abs/2511.18964


    Leopoldo Sarra / Senior AI Research Scientist, Axiomatic_AI

    https://leopoldo.sarra.eu/

    https://arxiv.org/abs/2411.00230


    Wasu Top Piriyakulkij / PhD Student, Cornell University

    https://www.cs.cornell.edu/~wp237/

    https://topwasu.github.io/poe-world


    Jumyung Park / Student, Gwangju Institute of Science and Technology (GIST)

    https://jumyung-park.vercel.app/

    https://intrig.vercel.app/research/composed-program-induction-with-latent-program-lattice


    Colin "Coco" Conwell / Research Scientist, MIT

    https://colinconwell.com/

    https://openreview.net/forum?id=nuZBUyzBs0


    Chris Hamblin / Research Scientist, After Thought

    https://chrishamblin.xyz/


    Other Mentions -

    DreamCoder, Kevin Ellis et al.

    https://arxiv.org/abs/2006.08381

    ARC-AGI Benchmark

    https://arcprize.org/arc-agi


    Credits -

    Host & Music: Bryan Landers, Technical Staff, Ndea

    Camera & Audio: Rowan Rintala

    Editor: Alejandro Ramirez

    https://x.com/ndea

    https://x.com/bryanlanders

    https://ndea.com

    40 min
  • Program Synthesis and Non-Monotonic Reasoning - Kedar Namjoshi
    Dec 22 2025

    Leading formal methods researcher Kedar Namjoshi (Distinguished Member of Technical Staff, Nokia Bell Labs) discusses his extended abstract "Program Synthesis And Non-monotonic Reasoning".

    This conversation explores why standard logical specifications, while well-suited for verification, can lead synthesis procedures to produce programs with unnecessary or undesirable actions, and how introducing minimality and preferences fundamentally changes the reasoning model required for synthesis.

    Kedar shares the research motivations behind the work, tracing its roots to non-monotonic logic and his experience synthesizing real-world control programs, and reflects on how practical scenarios like IoT device coordination expose deep gaps between formal specifications and human expectations of reasonable system behavior.


    In This Episode -

    • Why logical specifications are ideal for verification but problematic for synthesis

    • The notion of superfluous actions in synthesized programs

    • An IoT case study: door locks and refrigerators

    • How preference and minimality introduce non-monotonicity into synthesis

    • Compact strategies and minimizing behaviors in reactive systems

    • Checking minimality versus performing synthesis

    • Scaling synthesis to multi-agent systems (e.g., robotics)

    • Automaton-based approach limitations and LLM-based synthesis


    References -

    • https://synt2025.github.io/

    • https://link.springer.com/chapter/10.1007/978-3-030-99524-9_3

    • https://en.wikipedia.org/wiki/Alonzo_Church

    • https://dl.acm.org/doi/10.1145/3689624

    • https://www.youtube.com/watch?v=H53JDxt_cbI

    • https://www.youtube.com/watch?v=s6GKwRIQRf0


    About the Paper -

    "Program Synthesis And Non-monotonic Reasoning (Extended Abstract)"

    Kedar S. Namjoshi

    Presented at the SYNT 2025 Workshop

    The paper shows that enforcing minimality in program synthesis breaks monotonic reasoning assumptions, reframing the avoidance of unnecessary actions as a problem of preference-based, non-monotonic reasoning.

    https://drive.google.com/file/d/1btOZ52OLD_RuvYc8KIkEk0rLDBn0lBvR/view?usp=sharing


    About the Guest -

    Kedar Namjoshi is a Distinguished Member of Technical Staff at Nokia Bell Labs. His research spans formal methods, program synthesis, verification, distributed systems, and the design of correct-by-construction systems, with an emphasis on bridging theory and real-world applications.

    https://kedar-namjoshi.github.io


    Credits -

    Host & Music: Bryan Landers, Technical Staff, Ndea

    Editor: Alejandro Ramirez

    https://x.com/ndea

    https://x.com/bryanlanders

    https://ndea.com
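    The minimality preference discussed in this episode can be illustrated with a toy sketch. All names here are hypothetical (the action set, the effect map): among candidate programs satisfying a specification, prefer the one with the fewest actions, and notice that strengthening the spec changes which actions appear, which is the non-monotonic behavior the paper analyzes.

```python
def synthesize_minimal(candidates, satisfies, spec):
    """Preference-based synthesis sketch: among programs meeting the
    spec, prefer one with the fewest actions (no superfluous actions)."""
    correct = [p for p in candidates if satisfies(p, spec)]
    return min(correct, key=len) if correct else None

# Hypothetical IoT domain: which fact each action establishes.
EFFECTS = {"lock_door": "door_locked", "open_fridge": "fridge_open"}

def satisfies(program, spec):
    # A program meets the spec if its actions' effects cover every goal.
    return spec <= {EFFECTS[a] for a in program}

candidates = [("lock_door",), ("lock_door", "open_fridge")]

# Under the weaker spec the fridge action is superfluous and dropped...
minimal = synthesize_minimal(candidates, satisfies, {"door_locked"})
# ...but strengthening the spec brings it back: the earlier conclusion
# "the program never opens the fridge" is retracted (non-monotonicity).
stronger = synthesize_minimal(candidates, satisfies,
                              {"door_locked", "fridge_open"})
```

    Verifying each candidate against the spec stays monotonic; it is the preference over candidates that makes the synthesized behavior non-monotone in the spec.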

    44 min
  • Grammar Filtering For Syntax-Guided Synthesis - Mark Santolucito
    Dec 16 2025

    Leading program synthesis researcher Mark Santolucito (Assistant Professor, Barnard College, Columbia University) discusses his paper "Grammar Filtering for Syntax-Guided Synthesis".


    This conversation explores how machine learning can be used to shrink the search space of syntax-guided synthesis (SyGuS) problems, dramatically speeding up synthesis without sacrificing the strong correctness guarantees of formal methods.


    Mark shares the unexpected origin story of the paper, which began at a hackathon in Bermuda, explains the core ideas behind grammar filtering, and reflects on the broader role of neurosymbolic approaches in modern program synthesis.


    In This Episode -

    • What program synthesis is and why it matters

    • Intro to syntax-guided synthesis (SyGuS)

    • Why grammar size dominates synthesis runtime

    • How machine learning can safely prune grammars before synthesis

    • Predicting both criticality and runtime impact of grammar terminals

    • Combining neural guidance with SMT-based synthesis solvers

    • Results from SyGuS benchmarks (including ~50% runtime improvements)

    • Reflections on the future of neurosymbolic program synthesis


    About the Paper -

    "Grammar Filtering for Syntax-Guided Synthesis"

    Kairo Morton, William Hallahan, Elven Shum, Ruzica Piskac, Mark Santolucito

    Published at AAAI 2020


    The paper introduces a machine-learning–based technique for identifying and removing low-utility grammar terminals in SyGuS problems, significantly accelerating synthesis while maintaining correctness guarantees.

    https://arxiv.org/abs/2002.02884
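    The filter-then-fallback scheme described above can be sketched as follows. The classifier and solver here are hypothetical stand-ins; the point is that pruning only shrinks the search space, and falling back to the full grammar preserves the solver's correctness guarantees.

```python
def synthesize_with_filtering(grammar, spec, predict_keep, synthesize):
    """Grammar filtering sketch: prune terminals a learned model predicts
    are unlikely to be needed, try the smaller grammar first, and fall
    back to the full grammar so no solvable problem becomes unsolvable."""
    pruned = [t for t in grammar if predict_keep(t, spec)]
    result = synthesize(pruned, spec)
    if result is not None:
        return result                  # fast path: reduced search space
    return synthesize(grammar, spec)   # sound fallback: full grammar

# Hypothetical stand-ins for the classifier and the SyGuS solver.
toy_grammar = ["x", "+", "ite", "mod"]
toy_spec = "f(x) == x + x"

def toy_keep(terminal, spec):
    return terminal in ("x", "+")      # model predicts ite/mod unneeded

def toy_solver(grammar, spec):
    return "x + x" if {"x", "+"} <= set(grammar) else None

program = synthesize_with_filtering(toy_grammar, toy_spec,
                                    toy_keep, toy_solver)
```

    Even if the classifier prunes too aggressively, the fallback call still finds the program, trading at worst one extra solver run for large speedups on the common case.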


    About the Guest -

    Mark Santolucito is an Assistant Professor of Computer Science at Barnard College, Columbia University, where he develops novel program synthesis and analysis techniques to help programmers interact with code more effectively with a focus on Temporal Logic.

    https://www.marksantolucito.com


    Credits -

    Host & Music: Bryan Landers, Technical Staff, Ndea

    Editor: Alejandro Ramirez

    https://x.com/ndea

    https://ndea.com

    https://x.com/bryanlanders

    26 min
  • Introducing Abstract Synthesis
    Dec 15 2025

    Welcome to Abstract Synthesis - a podcast where we share the stories behind interesting academic papers in the world of program synthesis.

    Brought to you by AGI research lab Ndea.

    Subscribe wherever you get your podcasts to stay tuned for in-depth, technical interviews with leaders in the space of symbolic AI.

    https://ndea.com

    1 min