DiST-4D: Disentangled Spatiotemporal Diffusion with Metric Depth for 4D Driving Scene Generation

Abstract

Current generative models struggle to synthesize dynamic 4D driving scenes that simultaneously support temporal extrapolation and spatial novel view synthesis (NVS) without per-scene optimization. A key challenge lies in finding an efficient and generalizable geometric representation that seamlessly connects temporal and spatial synthesis. To address this, we propose DiST-4D, the first disentangled spatiotemporal diffusion framework for 4D driving scene generation, which leverages metric depth as the core geometric representation. DiST-4D decomposes the problem into two diffusion processes: DiST-T, which predicts future metric depth and multi-view RGB sequences directly from past observations, and DiST-S, which enables spatial NVS by training only on existing viewpoints while enforcing cycle consistency. This cycle consistency mechanism introduces a forward-backward rendering constraint, reducing the generalization gap between observed and unseen viewpoints. Metric depth is essential for both reliable forecasting and accurate spatial NVS, as it provides a view-consistent geometric representation that generalizes well to unseen perspectives. Experiments demonstrate that DiST-4D achieves state-of-the-art performance in both temporal prediction and NVS tasks, while also delivering competitive performance in planning-related evaluations. The project page is available at https://royalmelon0505.github.io/DiST-4D

Cite

Text

Guo et al. "DiST-4D: Disentangled Spatiotemporal Diffusion with Metric Depth for 4D Driving Scene Generation." International Conference on Computer Vision, 2025.

Markdown

[Guo et al. "DiST-4D: Disentangled Spatiotemporal Diffusion with Metric Depth for 4D Driving Scene Generation." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/guo2025iccv-dist4d/)

BibTeX

@inproceedings{guo2025iccv-dist4d,
  title     = {{DiST-4D: Disentangled Spatiotemporal Diffusion with Metric Depth for 4D Driving Scene Generation}},
  author    = {Guo, Jiazhe and Ding, Yikang and Chen, Xiwu and Chen, Shuo and Li, Bohan and Zou, Yingshuang and Lyu, Xiaoyang and Tan, Feiyang and Qi, Xiaojuan and Li, Zhiheng and Zhao, Hao},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {27231--27241},
  url       = {https://mlanthology.org/iccv/2025/guo2025iccv-dist4d/}
}