Joint 3D Layout and Depth Prediction from a Single Indoor Panorama Image
Abstract
In this paper, we propose a method that jointly learns layout prediction and depth estimation from a single indoor panorama image. Previous methods have treated layout prediction and depth estimation from a single panorama image as separate problems, yet the two tasks are tightly intertwined. By leveraging the layout depth map as an intermediate representation, our proposed method outperforms existing methods on both panorama layout prediction and depth estimation. Experiments on the challenging real-world Stanford 2D-3D dataset demonstrate that our approach obtains superior performance on both layout prediction (3D IoU: 85.81% vs. 79.79%) and depth estimation (Abs Rel: 0.068 vs. 0.079).
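For reference, the depth figure quoted in the abstract is the standard absolute relative error (Abs Rel). Below is a minimal sketch of how this metric is conventionally computed; the function and array names are illustrative assumptions, not the authors' evaluation code.

```python
import numpy as np

def abs_rel_error(pred_depth: np.ndarray, gt_depth: np.ndarray) -> float:
    """Standard Abs Rel metric: mean(|pred - gt| / gt) over valid pixels.

    `pred_depth` and `gt_depth` are per-pixel depth maps of equal shape;
    these names are illustrative, not taken from the paper's code.
    """
    valid = gt_depth > 0  # ignore pixels without ground-truth depth
    return float(np.mean(np.abs(pred_depth[valid] - gt_depth[valid]) / gt_depth[valid]))

# A lower value (e.g. 0.068 vs. 0.079) means the predicted depth deviates
# less from the ground truth, relative to the true depth, on average.
```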
Cite
Text
Zeng et al. "Joint 3D Layout and Depth Prediction from a Single Indoor Panorama Image." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58517-4_39
Markdown
[Zeng et al. "Joint 3D Layout and Depth Prediction from a Single Indoor Panorama Image." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/zeng2020eccv-joint/) doi:10.1007/978-3-030-58517-4_39
BibTeX
@inproceedings{zeng2020eccv-joint,
title = {{Joint 3D Layout and Depth Prediction from a Single Indoor Panorama Image}},
author = {Zeng, Wei and Karaoglu, Sezer and Gevers, Theo},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58517-4_39},
url = {https://mlanthology.org/eccv/2020/zeng2020eccv-joint/}
}