Toward Universal and Interpretable World Models for Open-Ended Learning Agents
Abstract
We introduce a generic, compositional, and interpretable class of generative world models that supports open-ended learning agents. This is a sparse class of Bayesian networks capable of approximating a broad range of stochastic processes, which provides agents with the ability to learn world models in a manner that may be both interpretable and computationally scalable. By integrating Bayesian structure learning and intrinsically motivated (model-based) planning, this approach enables agents to actively develop and refine their world models, which may lead to developmental learning and more robust, adaptive behavior.
Cite
Text
Da Costa. "Toward Universal and Interpretable World Models for Open-Ended Learning Agents." NeurIPS 2024 Workshops: IMOL, 2024.
Markdown
[Da Costa. "Toward Universal and Interpretable World Models for Open-Ended Learning Agents." NeurIPS 2024 Workshops: IMOL, 2024.](https://mlanthology.org/neuripsw/2024/costa2024neuripsw-universal/)
BibTeX
@inproceedings{costa2024neuripsw-universal,
title = {{Toward Universal and Interpretable World Models for Open-Ended Learning Agents}},
author = {Da Costa, Lancelot},
booktitle = {NeurIPS 2024 Workshops: IMOL},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/costa2024neuripsw-universal/}
}