Monocular Occupancy Prediction for Scalable Indoor Scenes
Abstract
Camera-based 3D occupancy prediction has recently garnered increasing attention in outdoor driving scenes; however, research on indoor scenes remains relatively unexplored. The core differences in indoor scenes lie in the complexity of scene scale and the variance in object size. In this paper, we propose a novel method, named ISO, for predicting indoor scene occupancy from monocular images. ISO harnesses the advantages of a pretrained depth model to achieve accurate depth predictions. Furthermore, we introduce the Dual Feature Line of Sight Projection (D-FLoSP) module within ISO, which enhances the learning of 3D voxel features. To foster further research in this domain, we introduce Occ-ScanNet, a large-scale occupancy benchmark for indoor scenes. With a dataset 40 times larger than NYUv2, it facilitates future scalable research in indoor scene analysis. Experimental results on both NYUv2 and Occ-ScanNet demonstrate that our method achieves state-of-the-art performance. The dataset and code are publicly available at https://github.com/hongxiaoy/ISO.git.
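For context, the D-FLoSP module builds on the general line-of-sight projection (FLoSP) idea: each 3D voxel center is projected onto the image plane through the camera intrinsics, and the 2D feature at that pixel is lifted to the voxel. The sketch below illustrates only this basic projection-and-sampling step, not the paper's actual dual-feature module; the function name `flosp_lift` and its nearest-neighbor sampling are illustrative assumptions.

```python
import numpy as np

def flosp_lift(feat2d, voxel_centers, K):
    """Illustrative line-of-sight lifting of 2D features to 3D voxels.

    feat2d:        (H, W, C) image feature map
    voxel_centers: (N, 3) voxel centers in camera coordinates
    K:             (3, 3) camera intrinsics matrix
    Returns (N, C) per-voxel features; voxels projecting outside the
    image (or behind the camera) receive zeros.
    """
    H, W, C = feat2d.shape
    # Perspective projection: [u*z, v*z, z] = K @ [x, y, z]
    uvw = voxel_centers @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    # Nearest-neighbor sampling (real implementations use bilinear).
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (voxel_centers[:, 2] > 0)
    out = np.zeros((voxel_centers.shape[0], C), dtype=feat2d.dtype)
    out[valid] = feat2d[v[valid], u[valid]]
    return out
```

A voxel at (0, 0, z) on the optical axis, for instance, samples the feature at the principal point; voxels outside the camera frustum contribute nothing.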
Cite
Text
Yu et al. "Monocular Occupancy Prediction for Scalable Indoor Scenes." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73404-5_3

Markdown
[Yu et al. "Monocular Occupancy Prediction for Scalable Indoor Scenes." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/yu2024eccv-monocular/) doi:10.1007/978-3-031-73404-5_3

BibTeX
@inproceedings{yu2024eccv-monocular,
title = {{Monocular Occupancy Prediction for Scalable Indoor Scenes}},
author = {Yu, Hongxiao and Wang, Yuqi and Chen, Yuntao and Zhang, Zhaoxiang},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-73404-5_3},
url = {https://mlanthology.org/eccv/2024/yu2024eccv-monocular/}
}