Depth Any Video with Scalable Synthetic Data

Abstract

Video depth estimation has long been hindered by the scarcity of consistent and scalable ground-truth data, leading to unreliable and temporally inconsistent results. In this paper, we introduce Depth Any Video, a model that tackles this challenge through two key innovations. First, we develop a scalable synthetic data pipeline that captures real-time video depth from diverse virtual environments, yielding 40,000 five-second video clips, each with precise depth annotations. Second, we leverage the strong priors of generative video diffusion models to handle real-world videos effectively, integrating advanced techniques such as rotary position encoding and flow matching to further enhance flexibility and efficiency. Unlike previous models, which are limited to fixed-length video sequences, our approach introduces a novel mixed-duration training strategy that handles videos of varying lengths and performs robustly across different frame rates, even on single frames. At inference, we propose a depth interpolation method that enables the model to infer high-resolution video depth for sequences of up to 150 frames. Our model outperforms all previous generative depth models in both spatial accuracy and temporal consistency. The code and model weights are open source.
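To give a concrete sense of the flow matching objective mentioned in the abstract, the sketch below shows a rectified-flow style training loss in PyTorch. This is a minimal illustration of the general technique under assumed names (velocity_model, depth_latent, noise, cond are hypothetical), not the authors' actual implementation.

import torch

def flow_matching_loss(velocity_model, depth_latent, noise, cond):
    # depth_latent: clean target latents (e.g. encoded depth frames), shape [B, ...]
    # noise: Gaussian noise of the same shape; cond: conditioning (e.g. RGB latents)
    b = depth_latent.shape[0]
    # Sample one timestep per example, uniform in [0, 1].
    t = torch.rand(b, device=depth_latent.device)
    t_b = t.view(b, *([1] * (depth_latent.dim() - 1)))
    # Point on the straight path from noise (t = 0) to data (t = 1).
    x_t = (1.0 - t_b) * noise + t_b * depth_latent
    # The model regresses the constant velocity along that path.
    v_target = depth_latent - noise
    v_pred = velocity_model(x_t, t, cond)
    return torch.nn.functional.mse_loss(v_pred, v_target)

Training with such an objective lets sampling proceed by integrating the predicted velocity field over a handful of steps, which is one reason flow matching is often preferred for efficiency over standard denoising diffusion schedules.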

Cite

Text

Yang et al. "Depth Any Video with Scalable Synthetic Data." International Conference on Learning Representations, 2025.

Markdown

[Yang et al. "Depth Any Video with Scalable Synthetic Data." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/yang2025iclr-depth/)

BibTeX

@inproceedings{yang2025iclr-depth,
  title     = {{Depth Any Video with Scalable Synthetic Data}},
  author    = {Yang, Honghui and Huang, Di and Yin, Wei and Shen, Chunhua and Liu, Haifeng and He, Xiaofei and Lin, Binbin and Ouyang, Wanli and He, Tong},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/yang2025iclr-depth/}
}