DiffusionSfM: Predicting Structure and Motion via Ray Origin and Endpoint Diffusion

Abstract

Current Structure-from-Motion (SfM) methods typically follow a two-stage pipeline, combining learned or geometric pairwise reasoning with a subsequent global optimization step. In contrast, we propose a data-driven multi-view reasoning approach that directly infers 3D scene geometry and camera poses from multi-view images. Our framework, DiffusionSfM, parameterizes scene geometry and cameras as pixel-wise ray origins and endpoints in a global frame and employs a transformer-based denoising diffusion model to predict them from multi-view inputs. To address practical challenges in training diffusion models with missing data and unbounded scene coordinates, we introduce specialized mechanisms that ensure robust learning. We empirically validate DiffusionSfM on both synthetic and real datasets, demonstrating that it outperforms classical and learning-based approaches while naturally modeling uncertainty.
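The ray origin/endpoint parameterization described above can be illustrated with a small sketch: given a camera's intrinsics, world-to-camera pose, and a per-pixel depth map, each pixel maps to an origin (the camera center) and an endpoint (the 3D surface point along its ray), both in the global frame. This is a minimal illustration under standard pinhole-camera assumptions, not the paper's actual implementation; the function name and signature are hypothetical.

```python
import numpy as np

def rays_from_camera(K, R, t, depth):
    """Pixel-wise ray origins and endpoints in the world frame (illustrative).

    K: (3, 3) intrinsics; R, t: world-to-camera rotation/translation such
    that x_cam = R @ x_world + t; depth: (H, W) per-pixel depth map.
    Returns (origins, endpoints), each of shape (H, W, 3).
    """
    H, W = depth.shape
    # Camera center in world coordinates: C = -R^T t.
    C = -R.T @ t
    origins = np.broadcast_to(C, (H, W, 3))

    # Back-project each pixel (u, v) to a camera-frame point at its depth.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    cam_pts = (np.linalg.inv(K) @ pix[..., None]).squeeze(-1) * depth[..., None]

    # Transform camera-frame points back to the world frame.
    endpoints = (R.T @ (cam_pts - t)[..., None]).squeeze(-1)
    return origins, endpoints
```

Under this view, a network that predicts origins and endpoints per pixel jointly encodes camera pose (via the origins and ray directions) and scene geometry (via the endpoints), which is the representation DiffusionSfM denoises.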

Cite

Text

Zhao et al. "DiffusionSfM: Predicting Structure and Motion via Ray Origin and Endpoint Diffusion." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.00592

Markdown

[Zhao et al. "DiffusionSfM: Predicting Structure and Motion via Ray Origin and Endpoint Diffusion." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/zhao2025cvpr-diffusionsfm/) doi:10.1109/CVPR52734.2025.00592

BibTeX

@inproceedings{zhao2025cvpr-diffusionsfm,
  title     = {{DiffusionSfM: Predicting Structure and Motion via Ray Origin and Endpoint Diffusion}},
  author    = {Zhao, Qitao and Lin, Amy and Tan, Jeff and Zhang, Jason Y. and Ramanan, Deva and Tulsiani, Shubham},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {6317--6326},
  doi       = {10.1109/CVPR52734.2025.00592},
  url       = {https://mlanthology.org/cvpr/2025/zhao2025cvpr-diffusionsfm/}
}