IM360: Large-Scale Indoor Mapping with 360 Cameras
Abstract
We present a novel 3D mapping pipeline for large-scale indoor environments. To address the significant challenges in large-scale indoor scenes, such as prevalent occlusions and textureless regions, we propose IM360, a novel approach that leverages the wide field of view of omnidirectional images and integrates the spherical camera model into the Structure-from-Motion (SfM) pipeline. Our SfM utilizes dense matching features specifically designed for 360 images, demonstrating superior capability in image registration. Furthermore, with the aid of mesh-based neural rendering techniques, we introduce a texture optimization method that refines texture maps and accurately captures view-dependent properties by combining diffuse and specular components. We evaluate our pipeline on large-scale indoor scenes and show its effectiveness in real-world scenarios: IM360 achieves a 3.5 dB PSNR increase in textured mesh reconstruction and attains state-of-the-art camera localization and registration performance on Matterport3D and Stanford2D3D.
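The spherical camera model mentioned in the abstract can be illustrated by the standard equirectangular mapping from panorama pixels to unit ray directions on the sphere. The sketch below is a minimal illustration of that idea, assuming a particular axis convention (y up, z forward) and uniform longitude/latitude parameterization; it is not taken from the paper's implementation.

import numpy as np

def equirect_pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit ray direction on the sphere.

    Standard spherical camera model for 360 images; the axis convention
    below (y up, z forward) is an illustrative assumption.
    """
    # Longitude in [-pi, pi), latitude in [-pi/2, pi/2]
    lon = (u / width - 0.5) * 2.0 * np.pi
    lat = (0.5 - v / height) * np.pi
    # Spherical-to-Cartesian conversion
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.array([x, y, z])

# Example: the center pixel of a 2048x1024 panorama maps to the forward axis.
print(equirect_pixel_to_ray(1024, 512, 2048, 1024))  # ~[0, 0, 1]

Because every pixel maps to a ray over the full sphere, a single 360 image observes far more of an indoor scene than a perspective image, which is what makes this model attractive for registration in occluded or textureless environments.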
Cite
Text
Jung et al. "IM360: Large-Scale Indoor Mapping with 360 Cameras." International Conference on Computer Vision, 2025.

Markdown
[Jung et al. "IM360: Large-Scale Indoor Mapping with 360 Cameras." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/jung2025iccv-im360/)

BibTeX
@inproceedings{jung2025iccv-im360,
  title     = {{IM360: Large-Scale Indoor Mapping with 360 Cameras}},
  author    = {Jung, Dongki and Choi, Jaehoon and Lee, Yonghan and Manocha, Dinesh},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {29040--29050},
  url       = {https://mlanthology.org/iccv/2025/jung2025iccv-im360/}
}