4DGT: Learning a 4D Gaussian Transformer Using Real-World Monocular Videos
Abstract
We propose 4DGT, a 4D Gaussian-based Transformer model for dynamic scene reconstruction, trained entirely on real-world monocular posed videos. Using 4D Gaussians as an inductive bias, 4DGT unifies static and dynamic components, enabling the modeling of complex, time-varying environments with varying object lifespans. We propose a novel density-control strategy during training that enables 4DGT to handle longer space-time inputs. Our model processes 64 consecutive posed frames in a rolling-window fashion, predicting consistent 4D Gaussians for the scene. Unlike optimization-based methods, 4DGT performs purely feed-forward inference, reducing reconstruction time from hours to seconds and scaling effectively to long video sequences. Trained only on large-scale monocular posed video datasets, 4DGT significantly outperforms prior Gaussian-based networks on real-world videos and achieves accuracy on par with optimization-based methods on cross-domain videos.
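To make the rolling-window inference described above concrete, here is a minimal Python sketch (not the authors' code) of feed-forward reconstruction over windows of 64 consecutive posed frames. The model interface, the stride, and all names here are illustrative assumptions; only the 64-frame window size and the feed-forward, per-window prediction come from the abstract.

from typing import Callable, List

def rolling_window_inference(
    frames: List[object],                      # posed video frames, in temporal order
    model: Callable[[List[object]], object],   # hypothetical feed-forward 4DGT-style model
    window_size: int = 64,                     # frames per window (from the abstract)
    stride: int = 32,                          # assumed overlap between consecutive windows
) -> List[object]:
    """Run a single feed-forward pass of the model on each window of frames."""
    gaussians_per_window = []
    for start in range(0, max(len(frames) - window_size + 1, 1), stride):
        window = frames[start:start + window_size]
        # One forward pass per window: seconds of inference instead of
        # hours of per-scene optimization.
        gaussians_per_window.append(model(window))
    return gaussians_per_window

An overlapping stride is one plausible way to keep the predicted 4D Gaussians consistent across window boundaries; the paper's actual windowing and fusion scheme may differ.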
Cite
Text
Xu et al. "4DGT: Learning a 4D Gaussian Transformer Using Real-World Monocular Videos." Advances in Neural Information Processing Systems, 2025.

Markdown
[Xu et al. "4DGT: Learning a 4D Gaussian Transformer Using Real-World Monocular Videos." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/xu2025neurips-4dgt/)

BibTeX
@inproceedings{xu2025neurips-4dgt,
  title = {{4DGT: Learning a 4D Gaussian Transformer Using Real-World Monocular Videos}},
  author = {Xu, Zhen and Li, Zhengqin and Dong, Zhao and Zhou, Xiaowei and Newcombe, Richard and Lv, Zhaoyang},
  booktitle = {Advances in Neural Information Processing Systems},
  year = {2025},
  url = {https://mlanthology.org/neurips/2025/xu2025neurips-4dgt/}
}