MVSAnywhere: Zero-Shot Multi-View Stereo

Abstract

Computing accurate depth from multiple views is a fundamental and longstanding challenge in computer vision. However, most existing approaches do not generalize well across different domains and scene types (e.g. indoor vs. outdoor). Training a general-purpose multi-view stereo model is challenging and raises several questions, e.g. how to best make use of transformer-based architectures, how to incorporate additional metadata when there is a variable number of input views, and how to estimate the range of valid depths, which can vary considerably across different scenes and is typically not known a priori. To address these issues, we introduce MVSA, a novel and versatile Multi-View Stereo architecture that aims to work Anywhere by generalizing across diverse domains and depth ranges. MVSA combines monocular and multi-view cues with an adaptive cost volume to deal with scale-related issues. We demonstrate state-of-the-art zero-shot depth estimation on the Robust Multi-View Depth Benchmark, surpassing existing multi-view stereo and monocular baselines.

Cite

Text

Izquierdo et al. "MVSAnywhere: Zero-Shot Multi-View Stereo." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.01073

Markdown

[Izquierdo et al. "MVSAnywhere: Zero-Shot Multi-View Stereo." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/izquierdo2025cvpr-mvsanywhere/) doi:10.1109/CVPR52734.2025.01073

BibTeX

@inproceedings{izquierdo2025cvpr-mvsanywhere,
  title     = {{MVSAnywhere: Zero-Shot Multi-View Stereo}},
  author    = {Izquierdo, Sergio and Sayed, Mohamed and Firman, Michael and Garcia-Hernando, Guillermo and Turmukhambetov, Daniyar and Civera, Javier and Mac Aodha, Oisin and Brostow, Gabriel and Watson, Jamie},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {11493--11504},
  doi       = {10.1109/CVPR52734.2025.01073},
  url       = {https://mlanthology.org/cvpr/2025/izquierdo2025cvpr-mvsanywhere/}
}