Unsupervised Learning of Depth and Ego-Motion from Video

Abstract

We present an unsupervised learning framework for the task of dense 3D geometry and camera motion estimation from unstructured video sequences. In common with recent work, we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to this prior work, our method is completely unsupervised, requiring only a sequence of images as input. We achieve this with a network that estimates the 6-DoF camera pose parameters of the input frames, along with a dense depth map for a reference view via single-view inference. Our loss is constructed by projecting the nearby posed views into the reference view using the predicted depth map. Results on the KITTI dataset demonstrate the effectiveness of our approach, which performs on par with a supervised deep learning approach that assumes ground-truth pose information at training time.
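
As a rough illustration of the projection step described in the abstract, the following is a minimal PyTorch sketch (not the authors' code) of a view-synthesis loss: reference-view pixels are back-projected with the predicted depth, transformed by the predicted relative pose, re-projected into a nearby source frame, and sampled there; the photometric difference supervises both networks. The function names, tensor shapes, and the plain L1 penalty are assumptions made for this sketch.

import torch
import torch.nn.functional as F

def inverse_warp(src_img, depth, pose, K):
    """Warp a nearby source frame into the reference view.

    src_img: (B, 3, H, W) source frame
    depth:   (B, 1, H, W) predicted depth for the reference frame
    pose:    (B, 3, 4)    relative pose [R|t] from reference to source
    K:       (B, 3, 3)    camera intrinsics
    """
    B, _, H, W = src_img.shape
    device = src_img.device

    # Homogeneous pixel grid of the reference view, shape (B, 3, H*W).
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij",
    )
    ones = torch.ones_like(xs)
    pix = torch.stack([xs, ys, ones], dim=0).reshape(1, 3, -1).expand(B, -1, -1)

    # Back-project to 3D camera coordinates: X = D * K^{-1} p.
    cam = torch.linalg.inv(K) @ pix * depth.reshape(B, 1, -1)

    # Transform into the source frame and project: p_s ~ K (R X + t).
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W, device=device)], dim=1)
    proj = K @ (pose @ cam_h)
    # Clamping z avoids division by zero; points behind the camera simply
    # land out of bounds and are zero-padded by grid_sample.
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)

    # Normalize pixel coordinates to [-1, 1] and bilinearly sample the source.
    u = 2.0 * uv[:, 0] / (W - 1) - 1.0
    v = 2.0 * uv[:, 1] / (H - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).reshape(B, H, W, 2)
    return F.grid_sample(src_img, grid, align_corners=True)

def view_synthesis_loss(ref_img, src_img, depth, pose, K):
    # L1 photometric difference between the reference frame and the
    # source frame warped into the reference view.
    warped = inverse_warp(src_img, depth, pose, K)
    return (ref_img - warped).abs().mean()

The full method additionally handles pixels this loss cannot explain (e.g. occlusions and moving objects, via an explainability mask) and applies the loss at multiple scales; the sketch keeps only the core warping step.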

Cite

Text

Zhou et al. "Unsupervised Learning of Depth and Ego-Motion from Video." Conference on Computer Vision and Pattern Recognition, 2017. doi:10.1109/CVPR.2017.700

Markdown

[Zhou et al. "Unsupervised Learning of Depth and Ego-Motion from Video." Conference on Computer Vision and Pattern Recognition, 2017.](https://mlanthology.org/cvpr/2017/zhou2017cvpr-unsupervised/) doi:10.1109/CVPR.2017.700

BibTeX

@inproceedings{zhou2017cvpr-unsupervised,
  title     = {{Unsupervised Learning of Depth and Ego-Motion from Video}},
  author    = {Zhou, Tinghui and Brown, Matthew and Snavely, Noah and Lowe, David G.},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2017},
  doi       = {10.1109/CVPR.2017.700},
  url       = {https://mlanthology.org/cvpr/2017/zhou2017cvpr-unsupervised/}
}