Feed-Forward Bullet-Time Reconstruction of Dynamic Scenes from Monocular Videos

Abstract

Recent advances in feed-forward reconstruction of static scenes have enabled high-quality novel view synthesis. However, these models often generalize poorly across diverse environments and fail to handle dynamic content. We present BTimer (short for Bullet Timer), the first motion-aware feed-forward model for real-time reconstruction and novel view synthesis of dynamic scenes. Our approach reconstructs the full scene in a 3D Gaussian Splatting representation at a given target (‘bullet’) timestamp by aggregating information from all the context frames. This formulation lets BTimer scale and generalize by training on both static and dynamic scene datasets. Given a casual monocular dynamic video, BTimer reconstructs a bullet-time scene within 150 ms while reaching state-of-the-art performance on both static and dynamic scene datasets, even compared with optimization-based approaches.
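To make the formulation concrete, here is a minimal sketch of the feed-forward interface the abstract describes: context frames tagged with their timestamps go in, and per-pixel 3D Gaussian parameters for a queried bullet timestamp come out in a single forward pass. The module name, the transformer backbone, and all dimensions are illustrative assumptions; the paper does not specify BTimer's internals here.

```python
import torch
import torch.nn as nn

class BulletTimeReconstructor(nn.Module):
    """Hypothetical sketch of a bullet-time feed-forward model:
    context frames + timestamps -> 3D Gaussians at a queried time.
    The stand-in transformer backbone is an assumption, not BTimer's
    actual architecture."""

    def __init__(self, dim=256, patch=16, num_layers=4):
        super().__init__()
        self.patch = patch
        # Patchify each frame into tokens.
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        # Shared embedding for frame timestamps and the bullet timestamp.
        self.time_embed = nn.Linear(1, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers)
        # 14 params per Gaussian: xyz(3) + rotation(4) + scale(3) + opacity(1) + rgb(3)
        self.head = nn.Linear(dim, patch * patch * 14)

    def forward(self, frames, frame_times, bullet_time):
        # frames: (N, 3, H, W) context frames; frame_times: (N,); bullet_time: scalar tensor
        n = frames.shape[0]
        tokens = self.embed(frames).flatten(2).transpose(1, 2)        # (N, T, dim)
        tokens = tokens + self.time_embed(frame_times.view(n, 1, 1))  # tag tokens with frame time
        tokens = tokens.reshape(1, -1, tokens.shape[-1])              # aggregate ALL context frames
        query = self.time_embed(bullet_time.view(1, 1, 1))            # bullet-timestamp query token
        out = self.backbone(torch.cat([query, tokens], dim=1))[:, 1:]
        return self.head(out).reshape(-1, 14)                         # one Gaussian per input pixel
```

Usage under the same assumptions: `model(torch.rand(4, 3, 64, 64), torch.linspace(0, 1, 4), torch.tensor(0.5))` reconstructs the scene at the halfway timestamp from four context frames; rendering the returned Gaussians with a splatting rasterizer is a separate step not sketched here.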

Cite

Text

Liang et al. "Feed-Forward Bullet-Time Reconstruction of Dynamic Scenes from Monocular Videos." Advances in Neural Information Processing Systems, 2025.

Markdown

[Liang et al. "Feed-Forward Bullet-Time Reconstruction of Dynamic Scenes from Monocular Videos." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/liang2025neurips-feedforward/)

BibTeX

@inproceedings{liang2025neurips-feedforward,
  title     = {{Feed-Forward Bullet-Time Reconstruction of Dynamic Scenes from Monocular Videos}},
  author    = {Liang, Hanxue and Ren, Jiawei and Mirzaei, Ashkan and Torralba, Antonio and Liu, Ziwei and Gilitschenski, Igor and Fidler, Sanja and Oztireli, Cengiz and Ling, Huan and Gojcic, Zan and Huang, Jiahui},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/liang2025neurips-feedforward/}
}