Designing Deep Networks for Surface Normal Estimation

Abstract

In the past few years, convolutional neural nets (CNNs) have shown incredible promise for learning visual representations. In this paper, we use CNNs for the task of predicting surface normals from a single image. But what is the right architecture? We propose to build upon the decades of hard work in 3D scene understanding to design a new CNN architecture for the task of surface normal estimation. We show that incorporating several constraints (man-made, Manhattan world) and meaningful intermediate representations (room layout, edge labels) in the architecture leads to state-of-the-art performance on surface normal estimation. We also show that our network is quite robust, achieving state-of-the-art results on other datasets without any fine-tuning.
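
To make the architectural idea concrete, below is a minimal sketch (in PyTorch, which we assume only for illustration) of a CNN with a shared trunk, heads for the intermediate representations (room layout, edge labels), and a fusion head that combines them into a per-pixel surface normal prediction. All layer sizes, class counts, and the fusion step are hypothetical placeholders, not the authors' exact network.

# Illustrative sketch only: a multi-output CNN with intermediate-representation
# heads fused into a surface normal prediction. Layer sizes and head shapes are
# assumptions, not the architecture from the paper.
import torch
import torch.nn as nn

class NormalNetSketch(nn.Module):
    def __init__(self, num_layout_classes=5, num_edge_classes=3):
        super().__init__()
        # Shared convolutional trunk (hypothetical sizes).
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Intermediate-representation heads: coarse room layout and edge labels.
        self.layout_head = nn.Conv2d(256, num_layout_classes, kernel_size=1)
        self.edge_head = nn.Conv2d(256, num_edge_classes, kernel_size=1)
        # Fusion head: combines trunk features with the intermediate predictions
        # to produce a 3-channel normal map.
        fused_channels = 256 + num_layout_classes + num_edge_classes
        self.normal_head = nn.Sequential(
            nn.Conv2d(fused_channels, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 3, kernel_size=1),
        )

    def forward(self, image):
        feats = self.trunk(image)
        layout = self.layout_head(feats)
        edges = self.edge_head(feats)
        fused = torch.cat([feats, layout, edges], dim=1)
        normals = self.normal_head(fused)
        # Renormalize so each pixel is a valid unit-length surface normal.
        normals = nn.functional.normalize(normals, dim=1)
        return normals, layout, edges

if __name__ == "__main__":
    model = NormalNetSketch()
    x = torch.randn(1, 3, 224, 224)  # a single RGB image
    normals, layout, edges = model(x)
    print(normals.shape, layout.shape, edges.shape)

The point of the sketch is the design choice the abstract describes: rather than regressing normals directly from raw features, the network first predicts meaningful intermediate quantities and then fuses them with the image features when estimating normals.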

Cite

Text

Wang et al. "Designing Deep Networks for Surface Normal Estimation." Conference on Computer Vision and Pattern Recognition, 2015. doi:10.1109/CVPR.2015.7298652

Markdown

[Wang et al. "Designing Deep Networks for Surface Normal Estimation." Conference on Computer Vision and Pattern Recognition, 2015.](https://mlanthology.org/cvpr/2015/wang2015cvpr-designing/) doi:10.1109/CVPR.2015.7298652

BibTeX

@inproceedings{wang2015cvpr-designing,
  title     = {{Designing Deep Networks for Surface Normal Estimation}},
  author    = {Wang, Xiaolong and Fouhey, David and Gupta, Abhinav},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2015},
  doi       = {10.1109/CVPR.2015.7298652},
  url       = {https://mlanthology.org/cvpr/2015/wang2015cvpr-designing/}
}