Empowering World Models with Reflection for Embodied Video Prediction
Abstract
Video generation models have made significant progress in simulating future states, showcasing their potential as world simulators in embodied scenarios. However, existing models often lack robust understanding, limiting their ability to perform multi-step predictions or handle Out-of-Distribution (OOD) scenarios. To address this challenge, we propose the Reflection of Generation (RoG), a set of intermediate reasoning strategies designed to enhance video prediction. It leverages the complementary strengths of pre-trained vision-language and video generation models, enabling them to function as a world model in embodied scenarios. To support RoG, we introduce the Embodied Video Anticipation Benchmark (EVA-Bench), a comprehensive benchmark that evaluates embodied world models across diverse tasks and scenarios, utilizing both in-domain and OOD datasets. Building on this foundation, we devise a world model, the Embodied Video Anticipator (EVA), which follows a multi-stage training paradigm to generate high-fidelity video frames and applies an autoregressive strategy to enable adaptive generalization to longer video sequences. Extensive experiments demonstrate the efficacy of EVA in various downstream tasks such as video generation and robotics, thereby paving the way for large-scale pre-trained models in real-world video prediction applications. The video demos are available at https://sites.google.com/view/icml-eva.
Cite
Text
Chi et al. "Empowering World Models with Reflection for Embodied Video Prediction." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Chi et al. "Empowering World Models with Reflection for Embodied Video Prediction." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/chi2025icml-empowering/)
BibTeX
@inproceedings{chi2025icml-empowering,
title = {{Empowering World Models with Reflection for Embodied Video Prediction}},
author = {Chi, Xiaowei and Fan, Chun-Kai and Zhang, Hengyuan and Qi, Xingqun and Zhang, Rongyu and Chen, Anthony and Chan, Chi-Min and Xue, Wei and Liu, Qifeng and Zhang, Shanghang and Guo, Yike},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {10383--10410},
volume = {267},
url = {https://mlanthology.org/icml/2025/chi2025icml-empowering/}
}