AutoScape: Geometry-Consistent Long-Horizon Scene Generation

Abstract

This paper proposes AutoScape, a long-horizon driving scene generation framework. At its core is a novel RGB-D diffusion model that iteratively generates sparse, geometrically consistent keyframes, which serve as reliable anchors for the scene's appearance and geometry. To maintain long-range geometric consistency, the model 1) handles image and depth jointly in a shared latent space, 2) explicitly conditions on the existing scene geometry (i.e., point clouds rendered from previously generated keyframes), and 3) steers the sampling process with warp-consistent guidance. Given high-quality RGB-D keyframes, a video diffusion model then interpolates between them to produce dense, coherent video frames. AutoScape generates realistic and geometrically consistent driving videos of over 20 seconds, improving long-horizon FID and FVD over the prior state of the art by 48.6% and 43.0%, respectively.
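
To make the warp-consistency idea concrete, the sketch below (not the authors' implementation) shows the geometric check that such guidance builds on: unproject a new keyframe's pixels with its depth, reproject them into a previously generated view, and measure photometric disagreement with the earlier keyframe. The intrinsics K, the pose convention T_ref_src, and the function name are illustrative assumptions.

import torch
import torch.nn.functional as F

def warp_consistency_error(rgb_src, depth_src, rgb_ref, K, T_ref_src):
    """rgb_*: (3,H,W), depth_src: (1,H,W), K: (3,3), T_ref_src: (4,4).
    Returns the mean photometric error between the source keyframe and the
    reference keyframe backward-warped into the source view via depth."""
    _, H, W = rgb_src.shape
    # Pixel grid in homogeneous coordinates.
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(3, -1)  # (3,HW)
    # Unproject to 3D points in the source camera frame using the depth map.
    pts = torch.linalg.inv(K) @ pix * depth_src.reshape(1, -1)              # (3,HW)
    # Transform into the reference camera frame.
    pts_h = torch.cat([pts, torch.ones(1, H * W)], dim=0)                   # (4,HW)
    pts_ref = (T_ref_src @ pts_h)[:3]
    # Project into the reference image plane.
    proj = K @ pts_ref
    uv = proj[:2] / proj[2].clamp(min=1e-6)                                 # (2,HW)
    # Normalize to [-1,1] and sample the reference image at the warped locations.
    grid = torch.stack([uv[0] / (W - 1) * 2 - 1,
                        uv[1] / (H - 1) * 2 - 1], dim=-1).reshape(1, H, W, 2)
    rgb_warped = F.grid_sample(rgb_ref.unsqueeze(0), grid, align_corners=True)
    # Mask out pixels that land outside the reference view or behind the camera.
    valid = ((grid.abs() <= 1).all(dim=-1) & (proj[2].reshape(1, H, W) > 0)).float()
    err = (rgb_warped.squeeze(0) - rgb_src).abs().mean(dim=0, keepdim=True)
    return (err * valid).sum() / valid.sum().clamp(min=1.0)

During sampling, a guidance step could subtract a scaled gradient of such an error with respect to the current RGB-D estimate at each denoising step, nudging new keyframes toward agreement with the previously generated geometry.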

Cite

Text

Chen et al. "AutoScape: Geometry-Consistent Long-Horizon Scene Generation." International Conference on Computer Vision, 2025.

Markdown

[Chen et al. "AutoScape: Geometry-Consistent Long-Horizon Scene Generation." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/chen2025iccv-autoscape/)

BibTeX

@inproceedings{chen2025iccv-autoscape,
  title     = {{AutoScape: Geometry-Consistent Long-Horizon Scene Generation}},
  author    = {Chen, Jiacheng and Jiang, Ziyu and Liang, Mingfu and Zhuang, Bingbing and Su, Jong-Chyi and Garg, Sparsh and Wu, Ying and Chandraker, Manmohan},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {25700--25711},
  url       = {https://mlanthology.org/iccv/2025/chen2025iccv-autoscape/}
}