SliceNet: Deep Dense Depth Estimation from a Single Indoor Panorama Using a Slice-Based Representation
Abstract
We introduce a novel deep neural network to estimate a depth map from a single monocular indoor panorama. The network works directly on the equirectangular projection, exploiting the properties of indoor 360° images. Starting from the fact that gravity plays an important role in the design and construction of man-made indoor scenes, we propose a compact representation that partitions the scene into vertical slices of the sphere, and we exploit long- and short-term relationships among slices to recover the equirectangular depth map. Our design makes it possible to maintain high-resolution information in the extracted features even with a deep network. The experimental results demonstrate that our method outperforms current state-of-the-art solutions in prediction accuracy, particularly for real-world data.
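As a hedged illustration of the slice-based representation described above (a minimal NumPy sketch, not the authors' implementation — all dimensions are hypothetical), an equirectangular feature map of shape (C, H, W) can be rearranged into a sequence of W vertical slices, each flattened into one feature vector; a sequence model such as an LSTM could then capture long- and short-term relationships among slices:

```python
import numpy as np

# Hypothetical dimensions: C channels over an H x W equirectangular map.
C, H, W = 8, 16, 32
features = np.random.rand(C, H, W)

# Each image column (a vertical slice of the sphere) becomes one sequence
# element: move the width axis first, then flatten each slice to a vector.
slices = features.transpose(2, 0, 1).reshape(W, C * H)

print(slices.shape)  # (32, 128): W slices, each a C*H-dimensional vector
```

A recurrent layer run over this length-W sequence would model dependencies between nearby and distant slices, consistent with the paper's stated use of long- and short-term relations.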
Cite
Text
Pintore et al. "SliceNet: Deep Dense Depth Estimation from a Single Indoor Panorama Using a Slice-Based Representation." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.01137
Markdown
[Pintore et al. "SliceNet: Deep Dense Depth Estimation from a Single Indoor Panorama Using a Slice-Based Representation." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/pintore2021cvpr-slicenet/) doi:10.1109/CVPR46437.2021.01137
BibTeX
@inproceedings{pintore2021cvpr-slicenet,
title = {{SliceNet: Deep Dense Depth Estimation from a Single Indoor Panorama Using a Slice-Based Representation}},
author = {Pintore, Giovanni and Agus, Marco and Almansa, Eva and Schneider, Jens and Gobbetti, Enrico},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2021},
pages = {11536--11545},
doi = {10.1109/CVPR46437.2021.01137},
url = {https://mlanthology.org/cvpr/2021/pintore2021cvpr-slicenet/}
}