Abstract Synthesis

By: Ndea

About this title

Go beyond the paper abstract to synthesize new ideas. AGI research lab Ndea presents the stories behind remarkable academic papers in the field of program synthesis.
  • Symbolic World Models - Top Piriyakulkij
    Jan 26 2026

    Wasu "Top" Piriyakulkij, PhD student at Cornell University advised by Kevin Ellis, discusses his paper "PoE-World: Compositional World Modeling with Products of Programmatic Experts." The episode explores how symbolic, programmatic world models can achieve strong generalization and sample efficiency by composing many small causal programs instead of learning a single monolithic model.

    The conversation traces how PoE-World emerged from earlier work on active concept learning and hypothesis testing, and how object-centric Atari environments became a natural testbed for scaling symbolic world models beyond grid worlds. Piriyakulkij reflects on design failures, surprising successes, and the moment the learned world model became interactive enough to serve as a real-time simulator.


    In This Episode -

    • Symbolic vs. neural world models

    • Products of programmatic experts

    • Modular causal rules as world models

    • Object-centric Atari environments

    • Montezuma’s Revenge as exploration benchmark

    • Sample-efficient learning from demonstrations

    • Weights as expert confidence signals

    • World models as executable simulators

    • Exploration as program testing


    References -

    • WorldCoder - https://arxiv.org/abs/2402.12275

    • Object-Centric Atari - https://arxiv.org/abs/2306.08649v2

    • ARC-AGI-3 - https://arcprize.org

    • VisualPredicator - https://arxiv.org/abs/2410.23156

    • People: Marvin Minsky, François Chollet, Armando Solar-Lezama


    About the Paper -

    "PoE-World: Compositional World Modeling with Products of Programmatic Experts"

    Authors: Wasu Top Piriyakulkij, Yishou Wang, Hao Tang, Martha Lewis, Kevin Ellis

    The paper introduces a symbolic world modeling framework in which many small, interpretable programs - each encoding a simple causal rule - are combined multiplicatively into a probabilistic world model. By learning weights over these programmatic experts from limited demonstrations, the system produces accurate, stochastic simulators that generalize to new environments with minimal data.

    https://arxiv.org/abs/2505.10819
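    The multiplicative combination described above can be sketched minimally. This is a toy illustration, not the paper's implementation: the experts, their hand-written rules, and the function names are all invented here; in PoE-World the experts are synthesized programs and the weights are learned from demonstrations.

```python
# Toy product-of-experts world model: each "expert" is a small program
# scoring a candidate next state, and the experts combine multiplicatively
# with per-expert weights acting as confidence exponents.
import math

def expert_gravity(state, action, next_state):
    # Toy causal rule: unsupported objects fall by one unit per step.
    expected_y = state["y"] + 1 if not state["on_ground"] else state["y"]
    return 0.9 if next_state["y"] == expected_y else 0.1

def expert_walls(state, action, next_state):
    # Toy causal rule: the x position stays within [0, 10].
    return 1.0 if 0 <= next_state["x"] <= 10 else 0.01

def poe_score(experts, weights, state, action, next_state):
    # Multiply expert probabilities; a low-weight expert has little influence.
    log_p = 0.0
    for e, w in zip(experts, weights):
        log_p += w * math.log(e(state, action, next_state))
    return math.exp(log_p)  # unnormalized score for this next state

experts = [expert_gravity, expert_walls]
weights = [1.0, 0.5]
s = {"x": 3, "y": 2, "on_ground": False}
s_next = {"x": 3, "y": 3, "on_ground": False}
print(poe_score(experts, weights, s, "noop", s_next))
```

    Because each expert only constrains the aspects of the world it knows about, adding a new causal rule means adding one more small program rather than retraining a monolithic model.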


    About the Guest -

    Wasu Top Piriyakulkij is a PhD student at Cornell University advised by Kevin Ellis. His research focuses on symbolic world models, program synthesis, and human-like learning and exploration in artificial agents. He is particularly interested in how compositional structure enables generalization in complex environments.

    • https://www.cs.cornell.edu/~wp237/

    • https://scholar.google.com/citations?user=nlO1TkkAAAAJ&hl=en


    Credits -

    Host & Music: Bryan Landers, Technical Staff, Ndea

    Editor: Alejandro Ramirez

    https://x.com/ndea

    https://x.com/bryanlanders

    https://ndea.com

    58 min.
  • Vision-Language Programs - Antonia Wüst
    Jan 19 2026

    Antonia Wüst, PhD student at TU Darmstadt, discusses her paper "Synthesizing Visual Concepts as Vision-Language Programs," which introduces a neuro-symbolic approach to visual concept induction by combining vision-language models with program synthesis.


    The work grew out of Wüst’s early PhD research on visual concept learning with symbolic programs, initially in synthetic domains, and her dissatisfaction with reliance on pre-trained object detectors. As vision-language models matured, the project evolved into a broader attempt to treat these models as perceptual tools embedded within a symbolic reasoning system.


    In This Episode -

    • Strengths & weaknesses of vision-language models (VLMs)

    • Visual concept induction

    • Symbol grounding across image sets

    • Designing a domain-specific language (DSL) for visual reasoning

    • A probabilistic context-free grammar for program search

    • Interpretability benefits of synthesized visual programs

    • Bongard problems and human-like abstraction


    References -

    • https://arxiv.org/abs/2511.18964

    • https://cs.stanford.edu/people/jcjohns/clevr/

    • https://en.wikipedia.org/wiki/Bongard_problem

    • https://wolfstam.github.io/

    • https://www.hikarushindo.com/

    • https://www.ml.informatik.tu-darmstadt.de/people/lhelff/index.html

    • https://ojs.aaai.org/index.php/AAAI/article/view/20616

    • https://arcprize.org/arc-agi


    About the Paper -

    “Synthesizing Visual Concepts as Vision-Language Programs”

    Antonia Wüst, Wolfgang Stammer, Hikaru Shindo, Lucas Nunes, Kristian Kersting

    NeurIPS 2025


    The paper presents a neuro-symbolic framework that combines vision-language models with program synthesis to learn visual concepts from examples. Vision-language models provide grounded symbolic representations, while program synthesis performs explicit reasoning to derive interpretable and reliable visual rules.


    https://arxiv.org/abs/2511.18964
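    As a rough illustration of the two-stage idea, the sketch below stands in for both stages with toy substitutes: hand-written symbolic facts play the role of vision-language model output, and the DSL is reduced to single attribute-equality tests. None of the names, predicates, or data come from the paper.

```python
# Toy neuro-symbolic pipeline: symbolic facts (as a VLM might emit) plus
# enumerative program synthesis over a tiny DSL of equality tests.
from itertools import product

# Symbolic representations a vision-language model might produce per image.
positives = [{"shape": "circle", "color": "red"},
             {"shape": "circle", "color": "blue"}]
negatives = [{"shape": "square", "color": "red"},
             {"shape": "triangle", "color": "blue"}]

def synthesize(pos, neg):
    # Enumerate candidate programs (attribute == value) and return the
    # first one that covers all positives and no negatives.
    attributes = ["shape", "color"]
    values = {v for img in pos + neg for v in img.values()}
    for attr, val in product(attributes, sorted(values)):
        program = lambda img, a=attr, v=val: img[a] == v
        if all(program(p) for p in pos) and not any(program(n) for n in neg):
            return (attr, val)  # interpretable rule, e.g. shape == circle
    return None

print(synthesize(positives, negatives))
```

    The payoff discussed in the episode is interpretability: the learned concept is an explicit, inspectable program over grounded symbols rather than an opaque embedding.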


    About the Guest -

    Antonia Wüst is a PhD student at Technische Universität Darmstadt in the AI and Machine Learning Lab, supervised by Kristian Kersting. Her research focuses on abstract visual reasoning, visual concept induction, and neuro-symbolic AI, with an emphasis on combining perception and symbolic reasoning.

    • https://www.ml.informatik.tu-darmstadt.de/people/awuest/index.html

    • https://x.com/toniwuest


    Credits -

    Host & Music: Bryan Landers, Technical Staff, Ndea

    Editor: Alejandro Ramirez

    https://x.com/ndea

    https://x.com/bryanlanders

    https://ndea.com

    54 min.
  • Inductive Logic Programming - Andrew Cropper
    Jan 12 2026

    Andrew Cropper, logic luminary and creator of the popular Popper, discusses the paper "Inductive Logic Programming at 30: A New Introduction."

    This episode examines how inductive logic programming (ILP) learns symbolic rules from examples and background knowledge, and what it takes to build ILP systems that scale. As machine learning has shifted toward opaque, data-hungry models, ILP offers a path to interpretable, constrained programs learned from data. The paper distills 30 years of ideas (learning settings, bias, search, recursion, predicate invention, and system design) into a modern entry point for symbolic generalization.

    Cropper reflects on how the paper emerged alongside his work on Popper, a high-performance ILP system designed around falsification and solver-backed search. He traces this line of thinking back to his training under Stephen Muggleton, one of the most influential researchers in ILP.


    In This Episode -

    • Inductive bias to constrain search

    • Using SAT/ASP-style engines as solver tools

    • Why recursion is a decisive capability for true generalization on algorithmic tasks

    • Predicate invention enabling more compact programs and better abstraction

    • Popper's core idea: learning by ruling out hypotheses via failures

    • A practical research workflow advantage: adding constraints to prune search can yield orders-of-magnitude speedups without rewriting the learner

    • ILP in the wild: scientific discovery loops (the "Robot Scientist" pattern), program-by-example tools (Flash Fill), and rule learning to guide RL agents


    References -

    • https://arxiv.org/abs/2008.07912

    • https://github.com/logic-and-learning-lab/Popper/

    • https://www.cs.cmu.edu/~tom/mlbook.html

    • https://europepmc.org/abstract/MED/14724639

    • https://www.microsoft.com/en-us/research/publication/automating-string-processing-spreadsheets-using-input-output-examples/


    About the Paper -

    "Inductive logic programming at 30: a new introduction"

    Andrew Cropper, Sebastijan Dumančić

    Journal of Artificial Intelligence Research (JAIR), 2022

    The paper explains how ILP learns symbolic rules from labeled examples plus background knowledge, and it breaks down ILP system design into learning settings, bias/representation choices, and search strategies. It also surveys major systems and practical limitations, framing modern ILP around solver-backed search, recursion, and predicate invention.

    https://arxiv.org/abs/2008.07912


    About the Guest -

    Andrew Cropper is an Associate Professor at the University of Helsinki and a principal investigator at ELLIS Institute Finland, where he works on combining logical reasoning with machine learning. His research centers on inductive logic programming and on building high-performance ILP systems (including Popper) that leverage modern SAT/ASP/MaxSAT solving to learn interpretable rules from data.

    https://andrewcropper.com/


    Credits -

    Host & Music: Bryan Landers, Technical Staff, Ndea

    Editor: Alejandro Ramirez

    https://x.com/ndea

    https://x.com/bryanlanders

    https://ndea.com
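    Popper's "learning from failures" idea can be caricatured with a toy hypothesis space. The threshold rules and pruning logic below are invented for illustration; Popper itself learns logic programs and encodes its pruning constraints for an ASP solver.

```python
# Toy falsification learner: enumerate hypotheses, test each against
# examples, and use each failure to prune a whole family of hypotheses.

def covers(hypothesis, example):
    # A "hypothesis" is a threshold rule: classify x as positive iff x >= t.
    return example >= hypothesis

def learn(pos, neg, candidates):
    viable = set(candidates)
    for t in sorted(candidates):
        if t not in viable:
            continue  # already ruled out by an earlier failure
        if any(covers(t, x) for x in neg):
            # Too general: every smaller threshold covers the same negative.
            viable -= {c for c in candidates if c <= t}
        elif not all(covers(t, x) for x in pos):
            # Too specific: every larger threshold misses the same positive.
            viable -= {c for c in candidates if c >= t}
    return min(viable) if viable else None

# Positives at 5 and 7, negatives at 2 and 3: the surviving thresholds
# are 4 and 5, and the most general one (4) is returned.
print(learn(pos=[5, 7], neg=[2, 3], candidates=range(10)))
```

    The point mirrored from the episode is that a single failed test eliminates many hypotheses at once, which is where the orders-of-magnitude speedups from added constraints come from.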

    1 hr 5 min.