Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-Scale Convolutional Architecture
Abstract
In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks.
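To make the coarse-to-fine idea concrete, below is a minimal sketch (not the authors' implementation) of a multi-scale regression network: a coarse scale predicts a low-resolution map from the whole image, and a finer scale refines it from the image plus the upsampled coarse prediction. All layer sizes, channel counts, and class/function names are illustrative assumptions, written in PyTorch for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseScale(nn.Module):
    """Predicts a low-resolution output map from the full image."""
    def __init__(self, out_channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, out_channels, 3, padding=1),
        )

    def forward(self, image):
        return self.features(image)  # coarse, low-resolution prediction

class FineScale(nn.Module):
    """Refines the coarse prediction using the image at higher resolution."""
    def __init__(self, out_channels=1):
        super().__init__()
        # input: RGB image concatenated with the upsampled coarse prediction
        self.refine = nn.Sequential(
            nn.Conv2d(3 + out_channels, 64, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_channels, 3, padding=1),
        )

    def forward(self, image, coarse_pred):
        up = F.interpolate(coarse_pred, size=image.shape[-2:],
                           mode='bilinear', align_corners=False)
        return self.refine(torch.cat([image, up], dim=1))

class MultiScaleRegressor(nn.Module):
    """Regress an output map directly from the image, coarse to fine."""
    def __init__(self, out_channels=1):
        super().__init__()
        self.coarse = CoarseScale(out_channels)
        self.fine = FineScale(out_channels)

    def forward(self, image):
        coarse = self.coarse(image)
        return self.fine(image, coarse)

# Usage: out_channels=1 for depth; normals would use 3 output channels,
# and semantic labeling one channel per class (with a softmax loss).
model = MultiScaleRegressor(out_channels=1)
pred = model(torch.randn(1, 3, 240, 320))  # -> shape (1, 1, 240, 320)
```

The same trunk adapts to each task by changing only the number of output channels and the loss, which mirrors the "small modifications" point in the abstract.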
Cite
Text
Eigen and Fergus. "Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-Scale Convolutional Architecture." International Conference on Computer Vision, 2015. doi:10.1109/ICCV.2015.304
Markdown
[Eigen and Fergus. "Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-Scale Convolutional Architecture." International Conference on Computer Vision, 2015.](https://mlanthology.org/iccv/2015/eigen2015iccv-predicting/) doi:10.1109/ICCV.2015.304
BibTeX
@inproceedings{eigen2015iccv-predicting,
title = {{Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-Scale Convolutional Architecture}},
author = {Eigen, David and Fergus, Rob},
booktitle = {International Conference on Computer Vision},
year = {2015},
doi = {10.1109/ICCV.2015.304},
url = {https://mlanthology.org/iccv/2015/eigen2015iccv-predicting/}
}