Neural Kaleidoscopic Space Sculpting
Abstract
We introduce a method that recovers full-surround 3D reconstructions from a single kaleidoscopic image using a neural surface representation. Full-surround 3D reconstruction is critical for many applications, such as augmented and virtual reality. A kaleidoscope, which uses a single camera and multiple mirrors, is a convenient way of achieving full-surround coverage, as it redistributes light directions and thus captures multiple viewpoints in a single image. This enables single-shot and dynamic full-surround 3D reconstruction. However, using a kaleidoscopic image for multi-view stereo is challenging, as we need to decompose the image into multi-view images by identifying which pixel corresponds to which virtual camera, a process we call labeling. To address this challenge, our approach avoids the need to explicitly estimate labels, and instead "sculpts" a neural surface representation through the careful use of silhouette, background, foreground, and texture information present in the kaleidoscopic image. We demonstrate the advantages of our method in a range of simulated and real experiments, on both static and dynamic scenes.
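For intuition on the virtual cameras the abstract refers to: each mirror bounce reflects the real camera's pose across the mirror plane, producing an additional virtual viewpoint in the same image. Below is a minimal sketch of this standard geometry (not the paper's code; the plane parameters `n`, `d` and the helper names are illustrative):

```python
import numpy as np

def reflection_matrix(n, d):
    """4x4 homogeneous reflection across the plane {x : n . x = d},
    where n is a unit normal and d is the plane offset."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    M = np.eye(4)
    M[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)  # Householder reflection
    M[:3, 3] = 2.0 * d * n                        # translation off the plane
    return M

def virtual_camera_pose(cam_to_world, n, d):
    """Pose of the virtual camera seen through one mirror bounce.
    Note the reflection flips handedness (the rotation part has det = -1)."""
    return reflection_matrix(n, d) @ cam_to_world

# Example: real camera at the origin, mirror plane x = 0.5.
cam_to_world = np.eye(4)
virt = virtual_camera_pose(cam_to_world, n=[1.0, 0.0, 0.0], d=0.5)
print(virt[:3, 3])  # virtual camera center at x = 1.0, mirrored across the plane
```

Higher-order bounces compose reflections (e.g. `reflection_matrix(n2, d2) @ reflection_matrix(n1, d1) @ cam_to_world`), which is why a few mirrors yield many virtual viewpoints; the labeling problem the abstract describes is deciding which of these virtual cameras each pixel belongs to.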
Cite
Text
Ahn et al. "Neural Kaleidoscopic Space Sculpting." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.00423
Markdown
[Ahn et al. "Neural Kaleidoscopic Space Sculpting." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/ahn2023cvpr-neural/) doi:10.1109/CVPR52729.2023.00423
BibTeX
@inproceedings{ahn2023cvpr-neural,
title = {{Neural Kaleidoscopic Space Sculpting}},
author = {Ahn, Byeongjoo and De Zeeuw, Michael and Gkioulekas, Ioannis and Sankaranarayanan, Aswin C.},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2023},
pages = {4349--4358},
doi = {10.1109/CVPR52729.2023.00423},
url = {https://mlanthology.org/cvpr/2023/ahn2023cvpr-neural/}
}