Long-Context State-Space Video World Models
Abstract
Video diffusion models have recently shown promise for world modeling through autoregressive frame prediction conditioned on actions. However, they struggle to maintain long-term memory due to the high computational cost associated with processing extended sequences in attention layers. To overcome this limitation, we propose a novel architecture leveraging state-space models (SSMs) to extend temporal memory without compromising computational efficiency. Unlike previous approaches that retrofit SSMs for non-causal vision tasks, our method fully exploits the inherent advantages of SSMs in causal sequence modeling. Central to our design is a block-wise SSM scanning scheme, which strategically trades off spatial consistency for extended temporal memory, combined with dense local attention to ensure coherence between consecutive frames. We evaluate the long-term memory capabilities of our model through spatial retrieval and reasoning tasks over extended horizons. Experiments on Memory Maze and Minecraft datasets demonstrate that our approach surpasses baselines in preserving long-range memory, while maintaining practical inference speeds suitable for interactive applications.
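The abstract describes combining a causal, block-wise SSM scan over frames (constant-cost long-term memory) with dense attention restricted to a short window of recent frames (short-range coherence). Below is a minimal, hypothetical PyTorch sketch of that idea; the names (BlockwiseSSM, local_frame_attention, block/window sizes) and the simple diagonal SSM are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F


class BlockwiseSSM(torch.nn.Module):
    """Toy diagonal SSM scanned causally over frames, independently per spatial token/block."""

    def __init__(self, dim: int, state_dim: int = 16):
        super().__init__()
        self.log_decay = torch.nn.Parameter(torch.zeros(state_dim))  # per-state decay logits
        self.inp = torch.nn.Linear(dim, state_dim, bias=False)       # input projection (B)
        self.out = torch.nn.Linear(state_dim, dim, bias=False)       # output projection (C)

    def forward(self, x):
        # x: (batch, frames, blocks, tokens_per_block, dim)
        b, t, nb, s, d = x.shape
        decay = torch.sigmoid(self.log_decay)                        # decay values in (0, 1)
        h = x.new_zeros(b, nb, s, decay.shape[0])                    # one recurrent state per spatial position
        ys = []
        for frame in range(t):                                       # causal scan: linear in frames, constant state size
            h = h * decay + self.inp(x[:, frame])                    # recurrent update carries long-term memory
            ys.append(self.out(h))
        return torch.stack(ys, dim=1) + x                            # residual connection


def local_frame_attention(x, window: int = 2, num_heads: int = 4):
    """Dense attention where each frame attends only to tokens from the last `window` frames."""
    b, t, nb, s, d = x.shape
    tokens = x.reshape(b, t, nb * s, d)
    outs = []
    for frame in range(t):
        lo = max(0, frame - window + 1)
        ctx = tokens[:, lo:frame + 1].reshape(b, -1, d)              # keys/values: recent frames only
        q = tokens[:, frame]                                         # queries: current frame
        attn = F.scaled_dot_product_attention(
            q.view(b, -1, num_heads, d // num_heads).transpose(1, 2),
            ctx.view(b, -1, num_heads, d // num_heads).transpose(1, 2),
            ctx.view(b, -1, num_heads, d // num_heads).transpose(1, 2),
        ).transpose(1, 2).reshape(b, -1, d)
        outs.append(attn)
    return torch.stack(outs, dim=1).reshape(b, t, nb, s, d) + x


if __name__ == "__main__":
    # 2 videos, 8 frames, a 4x4 grid of spatial blocks with 16 tokens each, 64-dim features
    video = torch.randn(2, 8, 16, 16, 64)
    mixed = BlockwiseSSM(dim=64)(video)        # long-range memory along the frame axis
    mixed = local_frame_attention(mixed)       # short-range coherence between nearby frames
    print(mixed.shape)                         # torch.Size([2, 8, 16, 16, 64])

The split of roles is the point of the sketch: the SSM scan keeps a fixed-size state per spatial position so memory can extend over arbitrarily many frames, while the attention term stays cheap because its context never exceeds a few recent frames.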
Cite
Text
Po et al. "Long-Context State-Space Video World Models." International Conference on Computer Vision, 2025.
Markdown
[Po et al. "Long-Context State-Space Video World Models." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/po2025iccv-longcontext/)
BibTeX
@inproceedings{po2025iccv-longcontext,
title = {{Long-Context State-Space Video World Models}},
author = {Po, Ryan and Nitzan, Yotam and Zhang, Richard and Chen, Berlin and Dao, Tri and Shechtman, Eli and Wetzstein, Gordon and Huang, Xun},
booktitle = {International Conference on Computer Vision},
year = {2025},
pages = {8733-8744},
url = {https://mlanthology.org/iccv/2025/po2025iccv-longcontext/}
}