VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning
Abstract
Broadly intelligent agents should form task-specific abstractions that selectively expose the essential elements of a task, while abstracting away the complexity of the raw sensorimotor space. In this work, we present Neuro-Symbolic Predicates, a first-order abstraction language that combines the strengths of symbolic and neural knowledge representations. We outline an online algorithm for inventing such predicates and learning abstract world models. We compare our approach to hierarchical reinforcement learning, vision-language model planning, and symbolic predicate invention approaches, on both in- and out-of-distribution tasks across five simulated robotic domains. Results show that our approach offers better sample complexity, stronger out-of-distribution generalization, and improved interpretability.
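To make the abstract's core idea concrete, below is a minimal, hypothetical Python sketch of what a neuro-symbolic predicate could look like: a symbolic, first-order signature (a name and typed argument slots) paired with a neural classifier that evaluates the predicate on raw perception. This is an illustration under assumed interfaces, not the authors' implementation; all names (`NeuroSymbolicPredicate`, `ObjectState`, `fake_vlm_query`) are invented for this example.

```python
from dataclasses import dataclass
from typing import Callable, Sequence, Set, Tuple

# Hypothetical handle on one object's raw observation (e.g., image crop + pose).
@dataclass
class ObjectState:
    name: str
    pose: Tuple[float, float, float]

# A neuro-symbolic predicate: a symbolic name and arity, grounded by a
# neural classifier (e.g., a vision-language model query) over raw state.
@dataclass
class NeuroSymbolicPredicate:
    name: str                                            # e.g., "On"
    arity: int                                            # number of arguments
    classifier: Callable[[Sequence[ObjectState]], bool]   # neural grounding

    def holds(self, objects: Sequence[ObjectState]) -> bool:
        assert len(objects) == self.arity
        return self.classifier(objects)

# Placeholder grounding: in practice this would query a VLM with the objects'
# image crops and a prompt such as "Is {a} on top of {b}?".
def fake_vlm_query(objects: Sequence[ObjectState]) -> bool:
    a, b = objects
    return a.pose[2] > b.pose[2]   # crude height comparison as a stand-in

On = NeuroSymbolicPredicate(name="On", arity=2, classifier=fake_vlm_query)

# The abstract state is the set of ground atoms that currently hold; a
# symbolic planner can treat this set as the abstract world model's state.
def abstract_state(predicates: Sequence[NeuroSymbolicPredicate],
                   objects: Sequence[ObjectState]) -> Set[Tuple[str, ...]]:
    facts = set()
    for pred in predicates:
        if pred.arity == 2:   # naive enumeration over ordered object pairs
            for a in objects:
                for b in objects:
                    if a is not b and pred.holds((a, b)):
                        facts.add((pred.name, a.name, b.name))
    return facts
```

For example, with two blocks whose poses place one above the other, `abstract_state([On], blocks)` would return `{("On", "blockA", "blockB")}`, which a classical planner can consume as a symbolic state description.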
Cite
Text
Liang et al. "VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning." International Conference on Learning Representations, 2025.
Markdown
[Liang et al. "VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/liang2025iclr-visualpredicator/)
BibTeX
@inproceedings{liang2025iclr-visualpredicator,
title = {{VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning}},
author = {Liang, Yichao and Kumar, Nishanth and Tang, Hao and Weller, Adrian and Tenenbaum, Joshua B. and Silver, Tom and Henriques, Joao F. and Ellis, Kevin},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/liang2025iclr-visualpredicator/}
}