Pano2CAD: Room Layout from a Single Panorama Image
Abstract
This paper presents a method for estimating the geometry of a room and the 3D pose of objects from a single 360° panorama image. Assuming Manhattan World geometry, we formulate the task as an inference problem in which we estimate the positions and orientations of walls and objects. The method combines surface normal estimation, 2D object detection, and 3D object pose estimation. Quantitative results are presented on a dataset of synthetically generated 3D rooms containing objects, as well as on a subset of hand-labeled images from the public SUN360 dataset.
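As a rough illustration of the kind of inference the abstract describes, the following is a minimal, hypothetical Python sketch (not the authors' code): it scores Manhattan-World layout hypotheses by how well each wall's axis-aligned normal agrees with estimated surface normals, standing in for the combination of normal estimation and layout inference. All names, values, and the scoring form are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' method): scoring Manhattan-World
# layout hypotheses against estimated surface normals.
from dataclasses import dataclass
from itertools import product
import math

# In a Manhattan-World floor plan, wall normals are restricted to four directions.
MANHATTAN_NORMALS = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]

@dataclass
class WallHypothesis:
    distance: float   # assumed distance from the camera centre (metres)
    normal: tuple     # one of the four Manhattan directions

def normal_likelihood(wall, observed_normal):
    """Agreement between a wall's Manhattan normal and a noisy normal estimate."""
    dot = wall.normal[0] * observed_normal[0] + wall.normal[1] * observed_normal[1]
    return math.exp(dot)  # larger when the directions agree

def layout_score(walls, observed_normals):
    """Sum per-wall evidence; a full system would add object detection/pose terms."""
    return sum(normal_likelihood(w, n) for w, n in zip(walls, observed_normals))

# Toy inference: for four walls, pick the Manhattan normals that best match
# noisy estimates (stand-ins for per-pixel CNN predictions).
observed = [(0.9, 0.1), (-1.0, 0.05), (0.1, 1.0), (-0.2, -0.95)]
best = max(
    (tuple(WallHypothesis(3.0, n) for n in combo)
     for combo in product(MANHATTAN_NORMALS, repeat=4)),
    key=lambda walls: layout_score(walls, observed),
)
print([w.normal for w in best])
```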
Cite
Text
Xu et al. "Pano2CAD: Room Layout from a Single Panorama Image." IEEE/CVF Winter Conference on Applications of Computer Vision, 2017. doi:10.1109/WACV.2017.46
Markdown
[Xu et al. "Pano2CAD: Room Layout from a Single Panorama Image." IEEE/CVF Winter Conference on Applications of Computer Vision, 2017.](https://mlanthology.org/wacv/2017/xu2017wacv-pano/) doi:10.1109/WACV.2017.46
BibTeX
@inproceedings{xu2017wacv-pano,
title = {{Pano2CAD: Room Layout from a Single Panorama Image}},
author = {Xu, Jiu and Stenger, Björn and Kerola, Tommi and Tung, Tony},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
year = {2017},
pages = {354--362},
doi = {10.1109/WACV.2017.46},
url = {https://mlanthology.org/wacv/2017/xu2017wacv-pano/}
}