Indoor Panorama Planar 3D Reconstruction via Divide and Conquer
Abstract
An indoor panorama typically depicts human-made structures that are parallel or perpendicular to gravity. We leverage this property to approximate the scene in a 360-degree image with (H)orizontal-planes and (V)ertical-planes. To this end, we propose an effective divide-and-conquer strategy that divides pixels based on their estimated plane orientation; the subsequent instance segmentation module then conquers the easier task of clustering planes within each orientation group. Moreover, the parameters of V-planes depend on the camera's yaw rotation, but translation-invariant CNNs are less aware of yaw changes. We therefore propose a yaw-invariant V-planar reparameterization for CNNs to learn. We create a benchmark for indoor panorama planar reconstruction by extending existing 360 depth datasets with ground-truth H&V-planes (referred to as the "PanoH&V" dataset) and adopt state-of-the-art planar reconstruction methods to predict H&V-planes as our baselines. Our method outperforms the baselines by a large margin on the proposed dataset.
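To make the yaw-invariance point concrete, below is a minimal NumPy sketch, not the authors' code: the plane parameterization, variable names, and angles are illustrative assumptions. It shows that a vertical plane's normal azimuth expressed relative to a pixel's panorama longitude stays the same under any camera yaw rotation, whereas the absolute azimuth does not, which is the kind of quantity a translation-invariant CNN can predict without knowing the yaw.

```python
# Minimal sketch (not the authors' code) of why a V-plane azimuth expressed
# relative to the pixel's panorama longitude is invariant to camera yaw.
# Assumptions (illustrative, not from the paper): a vertical plane is
# parameterized by its normal azimuth `theta` and offset `d`, i.e.
# n = (cos(theta), sin(theta), 0) with plane equation n . x = d, and each
# panorama pixel column corresponds to a longitude angle.
import numpy as np

def apply_yaw(theta, pixel_longitude, yaw):
    """A camera yaw rotation shifts both the plane-normal azimuth and the
    pixel's longitude by the same angle (the offset d is unchanged)."""
    return theta + yaw, pixel_longitude + yaw

theta = 0.7       # hypothetical V-plane normal azimuth (radians)
longitude = 1.3   # hypothetical pixel longitude (radians)

for yaw in (0.0, 0.5, 2.1):
    theta_r, lon_r = apply_yaw(theta, longitude, yaw)
    relative = (theta_r - lon_r) % (2 * np.pi)
    # The absolute azimuth changes with yaw, but the relative angle does not.
    print(f"yaw={yaw:.1f}  absolute azimuth={theta_r % (2*np.pi):.3f}  "
          f"relative angle={relative:.3f}")
```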
Cite
Text
Sun et al. "Indoor Panorama Planar 3D Reconstruction via Divide and Conquer." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.01118

Markdown

[Sun et al. "Indoor Panorama Planar 3D Reconstruction via Divide and Conquer." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/sun2021cvpr-indoor/) doi:10.1109/CVPR46437.2021.01118

BibTeX
@inproceedings{sun2021cvpr-indoor,
title = {{Indoor Panorama Planar 3D Reconstruction via Divide and Conquer}},
author = {Sun, Cheng and Hsiao, Chi-Wei and Wang, Ning-Hsu and Sun, Min and Chen, Hwann-Tzong},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2021},
pages = {11338-11347},
doi = {10.1109/CVPR46437.2021.01118},
url = {https://mlanthology.org/cvpr/2021/sun2021cvpr-indoor/}
}