A Two-Streamed Network for Estimating Fine-Scaled Depth Maps from Single RGB Images

Abstract

Estimating depth from a single RGB image is an ill-posed and inherently ambiguous problem. State-of-the-art deep learning methods can now estimate accurate 2D depth maps, but when the maps are projected into 3D, they lack local detail and are often highly distorted. We propose a fast-to-train two-streamed CNN that predicts depth and depth gradients, which are then fused together into an accurate and detailed depth map. We also define a novel set loss over multiple images; by regularizing the estimation between a common set of images, the network is less prone to over-fitting and achieves better accuracy than competing methods. Experiments on the NYU Depth v2 dataset show that our depth predictions are competitive with the state of the art and lead to faithful 3D projections.
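The sketch below is a minimal PyTorch illustration (not the authors' code) of the two ideas summarized in the abstract: (1) separate depth and depth-gradient streams whose outputs are fused into a single depth map, and (2) a "set loss" that adds a consistency term over a set of images on top of a per-image error. The layer sizes, the fusion scheme, and the exact form of the set term are illustrative assumptions, not the paper's architecture or loss.

import torch
import torch.nn as nn

class TwoStreamDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Depth stream: RGB -> coarse depth (1 channel).
        self.depth_stream = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))
        # Gradient stream: RGB -> depth gradients (dx, dy).
        self.grad_stream = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1))
        # Fusion: refine the coarse depth using the predicted gradients.
        self.fusion = nn.Conv2d(3, 1, 3, padding=1)

    def forward(self, rgb):
        depth = self.depth_stream(rgb)   # N x 1 x H x W
        grads = self.grad_stream(rgb)    # N x 2 x H x W
        fused = self.fusion(torch.cat([depth, grads], dim=1))
        return fused, depth, grads

def set_loss(pred, target, set_weight=0.1):
    # Per-image L1 error plus a consistency term over the whole set
    # (here: deviation of each residual from the set's mean residual).
    # This form is a hypothetical stand-in for the paper's set loss.
    residual = pred - target
    data_term = residual.abs().mean()
    set_term = (residual - residual.mean()).abs().mean()
    return data_term + set_weight * set_term

# Toy usage on random tensors standing in for a set of images of one scene.
net = TwoStreamDepthNet()
rgb = torch.rand(4, 3, 64, 64)        # a "set" of 4 RGB images
gt_depth = torch.rand(4, 1, 64, 64)   # ground-truth depth
fused, _, _ = net(rgb)
loss = set_loss(fused, gt_depth)
loss.backward()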

Cite

Text

Li et al. "A Two-Streamed Network for Estimating Fine-Scaled Depth Maps from Single RGB Images." International Conference on Computer Vision, 2017. doi:10.1109/ICCV.2017.365

Markdown

[Li et al. "A Two-Streamed Network for Estimating Fine-Scaled Depth Maps from Single RGB Images." International Conference on Computer Vision, 2017.](https://mlanthology.org/iccv/2017/li2017iccv-twostreamed/) doi:10.1109/ICCV.2017.365

BibTeX

@inproceedings{li2017iccv-twostreamed,
  title     = {{A Two-Streamed Network for Estimating Fine-Scaled Depth Maps from Single RGB Images}},
  author    = {Li, Jun and Klein, Reinhard and Yao, Angela},
  booktitle = {International Conference on Computer Vision},
  year      = {2017},
  doi       = {10.1109/ICCV.2017.365},
  url       = {https://mlanthology.org/iccv/2017/li2017iccv-twostreamed/}
}