Learning 3D Persistent Embodied World Models
Abstract
The ability to simulate the effects of future actions is crucial for intelligent embodied agents, enabling them to anticipate the consequences of their actions and plan accordingly. While a large body of existing work has explored how to construct such world models using video models, these models are often myopic: they retain no memory of scene content not captured by the currently observed images, preventing agents from making consistent long-horizon plans in complex environments where much of the scene is only partially observed. We introduce a new persistent embodied world model with an explicit memory of previously generated content, enabling much more consistent long-horizon simulation. At generation time, our video diffusion model predicts RGB-D video of the agent's future observations. The generated frames are then aggregated into a persistent 3D map of the environment. By conditioning the video model on this 3D spatial map, we show that video world models can faithfully simulate both seen and unseen parts of the world. Finally, we demonstrate the efficacy of such a world model in downstream embodied applications, enabling effective planning and policy learning.
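The following is a minimal sketch (not the authors' code) of the loop the abstract describes: a video model predicts RGB-D frames, which are back-projected into a persistent 3D map; that map is then rendered from the current viewpoint and fed back as conditioning for the next prediction. The diffusion model is stubbed out, and the camera intrinsics, pose format, and the `predict_rgbd` / `render_map` method names are assumptions for illustration only.

```python
import numpy as np

def backproject(rgb, depth, K, cam_to_world):
    """Lift an RGB-D frame into world-frame colored 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
    rays = pix @ np.linalg.inv(K).T                  # pixel rays in camera frame (z = 1)
    pts_cam = rays * depth.reshape(-1, 1)            # scale rays by z-depth
    pts_h = np.concatenate([pts_cam, np.ones((pts_cam.shape[0], 1))], axis=1)
    pts_world = (pts_h @ cam_to_world.T)[:, :3]      # 4x4 camera-to-world transform
    return pts_world, rgb.reshape(-1, 3)

class PersistentMap:
    """Accumulates colored points generated so far (a stand-in for the paper's 3D map)."""
    def __init__(self):
        self.points = np.empty((0, 3))
        self.colors = np.empty((0, 3))

    def integrate(self, pts, cols):
        self.points = np.vstack([self.points, pts])
        self.colors = np.vstack([self.colors, cols])

def rollout(model, world_map, actions, K, pose):
    """Simulate a trajectory: condition on the map, predict RGB-D, fuse it back in."""
    frames = []
    for action in actions:
        cond = model.render_map(world_map, pose, K)            # hypothetical: render map at current pose
        rgb, depth, pose = model.predict_rgbd(cond, action, pose)  # hypothetical: next RGB-D frame + pose
        pts, cols = backproject(rgb, depth, K, pose)
        world_map.integrate(pts, cols)                         # persist the generated content
        frames.append(rgb)
    return frames
```

The key design point illustrated here is that the map, not the raw frame history, carries long-horizon memory: any model that can be conditioned on a rendering of previously generated geometry could be slotted into this loop.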
Cite
Text
Zhou et al. "Learning 3D Persistent Embodied World Models." Advances in Neural Information Processing Systems, 2025.
Markdown
[Zhou et al. "Learning 3D Persistent Embodied World Models." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/zhou2025neurips-learning-a/)
BibTeX
@inproceedings{zhou2025neurips-learning-a,
  title     = {{Learning 3D Persistent Embodied World Models}},
  author    = {Zhou, Siyuan and Du, Yilun and Yang, Yuncong and Han, Lei and Chen, Peihao and Yeung, Dit-Yan and Gan, Chuang},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/zhou2025neurips-learning-a/}
}