DeMoN: Depth and Motion Network for Learning Monocular Stereo

Abstract

In this paper we formulate structure from motion as a learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the core part being an iterative network that is able to improve its own predictions. The network estimates not only depth and motion, but additionally surface normals, optical flow between the images and confidence of the matching. A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure from motion methods, results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of matching and, thus, better generalizes to structures not seen during training.
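A minimal sketch of what a loss "based on spatial relative differences" can look like: instead of penalizing per-pixel depth errors directly, penalize the mismatch of finite differences between neighboring pixels at several spacings. The function names, spacings, and the L1 penalty below are illustrative assumptions for this page, not taken verbatim from the paper.

import numpy as np

def spatial_gradient(d, h):
    """Finite differences of a 2-D map d with pixel spacing h (dx, dy)."""
    dx = d[:, h:] - d[:, :-h]
    dy = d[h:, :] - d[:-h, :]
    return dx, dy

def relative_difference_loss(pred, gt, spacings=(1, 2, 4)):
    """Sum of L1 mismatches between predicted and ground-truth gradients.
    Spacings and the L1 penalty are illustrative choices."""
    loss = 0.0
    for h in spacings:
        pdx, pdy = spatial_gradient(pred, h)
        gdx, gdy = spatial_gradient(gt, h)
        loss += np.abs(pdx - gdx).mean() + np.abs(pdy - gdy).mean()
    return loss

# Usage: compare a predicted inverse-depth map against ground truth.
pred = np.random.rand(64, 64).astype(np.float32)
gt = np.random.rand(64, 64).astype(np.float32)
print(relative_difference_loss(pred, gt))

Penalizing relative differences rather than absolute values emphasizes depth discontinuities and local structure, which is the intuition such a loss is meant to capture.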

Cite

Text

Ummenhofer et al. "DeMoN: Depth and Motion Network for Learning Monocular Stereo." Conference on Computer Vision and Pattern Recognition, 2017. doi:10.1109/CVPR.2017.596

Markdown

[Ummenhofer et al. "DeMoN: Depth and Motion Network for Learning Monocular Stereo." Conference on Computer Vision and Pattern Recognition, 2017.](https://mlanthology.org/cvpr/2017/ummenhofer2017cvpr-demon/) doi:10.1109/CVPR.2017.596

BibTeX

@inproceedings{ummenhofer2017cvpr-demon,
  title     = {{DeMoN: Depth and Motion Network for Learning Monocular Stereo}},
  author    = {Ummenhofer, Benjamin and Zhou, Huizhong and Uhrig, Jonas and Mayer, Nikolaus and Ilg, Eddy and Dosovitskiy, Alexey and Brox, Thomas},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2017},
  doi       = {10.1109/CVPR.2017.596},
  url       = {https://mlanthology.org/cvpr/2017/ummenhofer2017cvpr-demon/}
}