MAIR: Multi-View Attention Inverse Rendering with 3D Spatially-Varying Lighting Estimation
Abstract
We propose a scene-level inverse rendering framework that uses multi-view images to decompose the scene into geometry, an SVBRDF, and 3D spatially-varying lighting. Because multi-view images provide a variety of information about a scene, their use has long been taken for granted in object-level inverse rendering. However, owing to the absence of a multi-view HDR synthetic dataset, scene-level inverse rendering has mainly been studied using single-view images. We successfully perform scene-level inverse rendering from multi-view images by expanding the OpenRooms dataset, designing efficient pipelines to handle multi-view images, and splitting spatially-varying lighting. Our experiments show that the proposed method not only outperforms single-view-based methods but also achieves robust performance on unseen real-world scenes. Moreover, our sophisticated 3D spatially-varying lighting volume allows photorealistic object insertion at any 3D location.
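To make the object-insertion claim concrete, the sketch below illustrates one plausible way a 3D spatially-varying lighting volume can be queried at an arbitrary insertion point via trilinear interpolation. This is a minimal illustration, not the authors' implementation: the function `query_lighting_volume`, the grid resolution, and the per-voxel lighting parameterization (e.g., spherical-Gaussian coefficients) are all assumptions made for exposition.

```python
# Minimal sketch (NOT the authors' code): querying a dense 3D lighting volume
# at an arbitrary 3D location via trilinear interpolation, as one might do
# when inserting a virtual object into the scene.
# Assumptions: per-voxel lighting parameters stored on a regular grid, and
# query points expressed in the volume's normalized [-1, 1]^3 coordinates.
import torch
import torch.nn.functional as F

def query_lighting_volume(volume: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
    """Trilinearly sample per-point lighting parameters from a 3D volume.

    volume: (1, C, D, H, W) grid of C lighting parameters per voxel.
    points: (N, 3) query locations in normalized [-1, 1] coordinates (x, y, z).
    returns: (N, C) interpolated lighting parameters.
    """
    # grid_sample expects a (1, D_out, H_out, W_out, 3) sampling grid for 5D
    # input; fold the N query points into the D_out dimension.
    grid = points.view(1, -1, 1, 1, 3)
    # mode="bilinear" on a 5D input performs trilinear interpolation.
    sampled = F.grid_sample(volume, grid, mode="bilinear", align_corners=True)
    # (1, C, N, 1, 1) -> (N, C)
    return sampled.squeeze(-1).squeeze(-1).squeeze(0).transpose(0, 1)

# Toy example: a 16^3 volume with 12 lighting coefficients per voxel,
# queried at a single hypothetical insertion point.
volume = torch.randn(1, 12, 16, 16, 16)
point = torch.tensor([[0.25, -0.5, 0.1]])
lighting = query_lighting_volume(volume, point)  # shape (1, 12)
```

Because the volume is defined over the whole scene, the same query works at any 3D location, which is what distinguishes this representation from per-pixel lighting maps tied to observed surfaces.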
Cite
Text
Choi et al. "MAIR: Multi-View Attention Inverse Rendering with 3D Spatially-Varying Lighting Estimation." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.00811
Markdown
[Choi et al. "MAIR: Multi-View Attention Inverse Rendering with 3D Spatially-Varying Lighting Estimation." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/choi2023cvpr-mair/) doi:10.1109/CVPR52729.2023.00811
BibTeX
@inproceedings{choi2023cvpr-mair,
title = {{MAIR: Multi-View Attention Inverse Rendering with 3D Spatially-Varying Lighting Estimation}},
author = {Choi, JunYong and Lee, SeokYeong and Park, Haesol and Jung, Seung-Won and Kim, Ig-Jae and Cho, Junghyun},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2023},
pages = {8392-8401},
doi = {10.1109/CVPR52729.2023.00811},
url = {https://mlanthology.org/cvpr/2023/choi2023cvpr-mair/}
}