Video Prediction Transformers Without Recurrence or Convolution
Abstract
Video prediction has witnessed the emergence of RNN-based models, led by ConvLSTM, and CNN-based models, led by SimVP. Following the significant success of ViT, recent works have integrated ViT into both RNN and CNN frameworks, achieving improved performance. While acknowledging these prior approaches, we raise a fundamental question: is there a simpler yet more effective solution that eliminates the high computational cost of RNNs while addressing the limited receptive fields and poor generalization of CNNs? How far can a simple, pure transformer model go for video prediction? In this paper, we propose PredFormer, a framework built entirely on Gated Transformers, and provide a comprehensive analysis of 3D attention in the context of video prediction. Extensive experiments demonstrate that PredFormer achieves state-of-the-art performance on four standard benchmarks. Its significant gains in both accuracy and efficiency highlight its potential as a strong baseline for real-world video prediction applications. The source code and trained models will be released to the public.
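The abstract describes PredFormer as a pure transformer built from gated transformer blocks with spatiotemporal (3D) attention over video tokens. Below is a minimal PyTorch sketch of one such block to make the idea concrete; the gating formulation (a SiLU-gated MLP), the pre-norm layer layout, and all hyperparameters are illustrative assumptions and are not taken from the paper or its released code.

# Minimal sketch of a gated transformer block with full spatiotemporal ("3D")
# self-attention, in the spirit of the PredFormer description above.
# The exact layer layout, gating formulation, and hyperparameters are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class GatedMLP(nn.Module):
    """Feed-forward block with a multiplicative (GLU-style) gate."""

    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.value = nn.Linear(dim, hidden)
        self.gate = nn.Linear(dim, hidden)
        self.out = nn.Linear(hidden, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SiLU-gated linear unit: value path modulated by a learned gate.
        return self.out(nn.functional.silu(self.gate(x)) * self.value(x))


class GatedTransformerBlock(nn.Module):
    """Pre-norm self-attention + gated MLP over a flattened (T*H*W) token sequence."""

    def __init__(self, dim: int = 256, heads: int = 8, mlp_ratio: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = GatedMLP(dim, mlp_ratio * dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim), where tokens = frames * patches per frame,
        # so attention mixes information across space and time at once.
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.norm2(x))


if __name__ == "__main__":
    B, T, P, D = 2, 10, 64, 256        # batch, frames, patches per frame, embed dim
    tokens = torch.randn(B, T * P, D)  # patch-embedded input clip
    block = GatedTransformerBlock(D)
    print(block(tokens).shape)         # torch.Size([2, 640, 256])

Stacking blocks like this (or factorizing the attention across spatial and temporal axes) is one way a recurrence-free, convolution-free predictor could be assembled; the paper's actual design choices should be taken from the released code once available.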
Cite
Text
Tang et al. "Video Prediction Transformers Without Recurrence or Convolution." Transactions on Machine Learning Research, 2026.
Markdown
[Tang et al. "Video Prediction Transformers Without Recurrence or Convolution." Transactions on Machine Learning Research, 2026.](https://mlanthology.org/tmlr/2026/tang2026tmlr-video/)
BibTeX
@article{tang2026tmlr-video,
  title   = {{Video Prediction Transformers Without Recurrence or Convolution}},
  author  = {Tang, Yujin and Qi, Lu and Li, Xiangtai and Ma, Chao and Yang, Ming-Hsuan},
  journal = {Transactions on Machine Learning Research},
  year    = {2026},
  url     = {https://mlanthology.org/tmlr/2026/tang2026tmlr-video/}
}