DeepSFM: Structure from Motion via Deep Bundle Adjustment

Abstract

Structure from motion (SfM) is an essential computer vision problem that has not been well handled by deep learning. One promising trend is to apply explicit structural constraints, e.g. a 3D cost volume, within the network. However, existing methods usually assume accurate camera poses, obtained either from ground truth or from other methods, which is unrealistic in practice. In this work, we design a physically driven architecture, namely DeepSFM, inspired by traditional Bundle Adjustment (BA), which consists of two cost-volume-based architectures for depth and pose estimation respectively and runs them iteratively to improve both. The explicit constraints on both depth (structure) and pose (motion), when combined with the learning components, bring together the merits of traditional BA and emerging deep learning technology. Extensive experiments on various datasets show that our model achieves state-of-the-art performance on both depth and pose estimation, with superior robustness to fewer input views and noise in the initialization.
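The abstract describes an alternating scheme: a depth module and a pose module, each built on a cost volume, repeatedly refine their estimates given the other's current value. The following is a minimal conceptual sketch of that iteration loop only; all function and variable names are illustrative placeholders and do not reflect the authors' implementation.

```python
# Hypothetical sketch of the alternating depth/pose refinement described in
# the abstract. The two refine_* functions stand in for the paper's
# cost-volume-based depth and pose modules; here they are no-op placeholders.

import numpy as np

def refine_depth(depth, pose, images):
    """Placeholder for a depth cost-volume module: conceptually, it would
    build a cost volume conditioned on the current pose and regress an
    improved depth map. Here it simply returns the input unchanged."""
    return depth

def refine_pose(depth, pose, images):
    """Placeholder for a pose cost-volume module: conceptually, it would
    score candidate poses around the current estimate against the current
    depth and return an improved pose. Here it returns the input unchanged."""
    return pose

def deep_bundle_adjustment(images, init_depth, init_pose, num_iters=4):
    """Alternate depth and pose refinement, mirroring the iterative scheme
    the abstract describes (structure and motion improved jointly)."""
    depth, pose = init_depth, init_pose
    for _ in range(num_iters):
        depth = refine_depth(depth, pose, images)
        pose = refine_pose(depth, pose, images)
    return depth, pose

if __name__ == "__main__":
    images = [np.zeros((64, 64, 3)) for _ in range(2)]  # dummy image pair
    depth0 = np.ones((64, 64))                          # flat initial depth
    pose0 = np.eye(4)                                    # identity initial pose
    depth, pose = deep_bundle_adjustment(images, depth0, pose0)
    print(depth.shape, pose.shape)
```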

Cite

Text

Wei et al. "DeepSFM: Structure from Motion via Deep Bundle Adjustment." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58452-8_14

Markdown

[Wei et al. "DeepSFM: Structure from Motion via Deep Bundle Adjustment." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/wei2020eccv-deepsfm/) doi:10.1007/978-3-030-58452-8_14

BibTeX

@inproceedings{wei2020eccv-deepsfm,
  title     = {{DeepSFM: Structure from Motion via Deep Bundle Adjustment}},
  author    = {Wei, Xingkui and Zhang, Yinda and Li, Zhuwen and Fu, Yanwei and Xue, Xiangyang},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58452-8_14},
  url       = {https://mlanthology.org/eccv/2020/wei2020eccv-deepsfm/}
}