Zero-Shot Monocular Scene Flow Estimation in the Wild

Abstract

Large models have shown generalization across datasets for many low-level vision tasks, like depth estimation, but no such general models exist for scene flow. Even though scene flow prediction has wide potential, its practical use is limited because current predictive models do not generalize. We identify three key challenges and propose solutions for each. First, we create a method that jointly estimates geometry and motion for accurate prediction. Second, we alleviate scene flow data scarcity with a data recipe that affords us 1M annotated training samples across diverse synthetic scenes. Third, we evaluate different parameterizations for scene flow prediction and adopt a natural and effective parameterization. Our model outperforms existing methods as well as baselines built on large-scale models in terms of 3D end-point error, and shows zero-shot generalization to the casually captured videos from DAVIS and the robotic manipulation scenes from RoboTAP. Overall, our approach makes scene flow prediction more practical in the wild.
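
For reference, 3D end-point error (EPE), the evaluation metric named above, is the mean Euclidean distance between predicted and ground-truth per-point 3D displacements. Below is a minimal NumPy sketch of this standard metric; the array shapes and masking convention are illustrative assumptions, not the paper's evaluation code.

import numpy as np

def scene_flow_epe3d(pred_flow, gt_flow, valid_mask=None):
    """Mean 3D end-point error between predicted and ground-truth scene flow.

    pred_flow, gt_flow: arrays of shape (H, W, 3), per-pixel 3D displacement
    vectors in camera coordinates (e.g., meters).
    valid_mask: optional (H, W) boolean mask of pixels with valid ground truth.
    """
    # Per-pixel L2 distance between the two 3D displacement vectors.
    err = np.linalg.norm(pred_flow - gt_flow, axis=-1)
    if valid_mask is not None:
        err = err[valid_mask]
    return float(err.mean())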

Cite

Text

Liang et al. "Zero-Shot Monocular Scene Flow Estimation in the Wild." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.01959

Markdown

[Liang et al. "Zero-Shot Monocular Scene Flow Estimation in the Wild." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/liang2025cvpr-zeroshot/) doi:10.1109/CVPR52734.2025.01959

BibTeX

@inproceedings{liang2025cvpr-zeroshot,
  title     = {{Zero-Shot Monocular Scene Flow Estimation in the Wild}},
  author    = {Liang, Yiqing and Badki, Abhishek and Su, Hang and Tompkin, James and Gallo, Orazio},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {21031--21044},
  doi       = {10.1109/CVPR52734.2025.01959},
  url       = {https://mlanthology.org/cvpr/2025/liang2025cvpr-zeroshot/}
}