Robust Consistent Video Depth Estimation

Abstract

We present an algorithm for estimating consistent dense depth maps and camera poses from a monocular video. We integrate a learning-based depth prior, in the form of a convolutional neural network trained for single-image depth estimation, with geometric optimization, to estimate a smooth camera trajectory as well as detailed and stable depth reconstruction. Our algorithm combines two complementary techniques: (1) flexible deformation splines for low-frequency, large-scale alignment and (2) geometry-aware depth filtering for high-frequency alignment of fine depth details. In contrast to prior approaches, our method does not require camera poses as input and achieves robust reconstruction for challenging hand-held cell phone captures that contain a significant amount of noise, shake, motion blur, and rolling shutter deformations. Our method quantitatively outperforms the state of the art on the Sintel benchmark for both depth and pose estimation, and attains favorable qualitative results across diverse in-the-wild datasets.

Cite

Text

Kopf et al. "Robust Consistent Video Depth Estimation." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00166

Markdown

[Kopf et al. "Robust Consistent Video Depth Estimation." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/kopf2021cvpr-robust/) doi:10.1109/CVPR46437.2021.00166

BibTeX

@inproceedings{kopf2021cvpr-robust,
  title     = {{Robust Consistent Video Depth Estimation}},
  author    = {Kopf, Johannes and Rong, Xuejian and Huang, Jia-Bin},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2021},
  pages     = {1611--1621},
  doi       = {10.1109/CVPR46437.2021.00166},
  url       = {https://mlanthology.org/cvpr/2021/kopf2021cvpr-robust/}
}