Scalable, Detailed and Mask-Free Universal Photometric Stereo
Abstract
In this paper, we introduce SDM-UniPS, a groundbreaking Scalable, Detailed, Mask-free, and Universal Photometric Stereo network. Our approach can recover astonishingly intricate surface normal maps, rivaling the quality of 3D scanners, even when images are captured under unknown, spatially-varying lighting conditions in uncontrolled environments. We have extended previous universal photometric stereo networks to extract spatial-light features, utilizing all available information in high-resolution input images and accounting for non-local interactions among surface points. Moreover, we present a new synthetic training dataset that encompasses a diverse range of shapes, materials, and illumination scenarios found in real-world scenes. Through extensive evaluation, we demonstrate that our method not only surpasses calibrated, lighting-specific techniques on public benchmarks, but also excels with a significantly smaller number of input images even without object masks.
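For context on the "calibrated, lighting-specific techniques" the abstract compares against, a minimal sketch of classic calibrated photometric stereo (Lambertian model with known directional lights) is shown below. This is background only, not the paper's universal network; all names and data are illustrative. Under the Lambertian model, each pixel's intensities satisfy I = L (ρ n), so the albedo-scaled normal is recovered per pixel by least squares.

```python
import numpy as np

# Classic calibrated photometric stereo for a Lambertian surface:
# stacking one pixel's intensities under known lights L (num_lights, 3)
# gives I = L @ (rho * n); solve for g = rho * n by least squares,
# then split g into albedo (its norm) and a unit normal.
# Illustrative sketch only -- not the method proposed in the paper.

def solve_normals(intensities, lights):
    """intensities: (num_lights, num_pixels); lights: (num_lights, 3).
    Returns unit normals (num_pixels, 3) and albedos (num_pixels,)."""
    g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)  # (3, num_pixels)
    albedo = np.linalg.norm(g, axis=0)
    normals = (g / np.maximum(albedo, 1e-12)).T
    return normals, albedo

# Tiny synthetic check: render two pixels under four lights, then recover.
L = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.866],
              [0.0, 0.5, 0.866],
              [-0.5, 0.0, 0.866]])
n_true = np.array([[0.0, 0.0, 1.0], [0.3, 0.1, 0.949]])
n_true /= np.linalg.norm(n_true, axis=1, keepdims=True)
rho_true = np.array([0.8, 0.5])
I = L @ (rho_true * n_true.T)          # (num_lights, num_pixels)
n_est, rho_est = solve_normals(I, L)
```

With clean, shadow-free renderings and more lights than unknowns, the recovery is exact up to numerical precision; the paper's setting drops exactly these assumptions (known lighting, controlled capture, object masks).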
Cite
Text
Ikehata. "Scalable, Detailed and Mask-Free Universal Photometric Stereo." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.01268
Markdown
[Ikehata. "Scalable, Detailed and Mask-Free Universal Photometric Stereo." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/ikehata2023cvpr-scalable/) doi:10.1109/CVPR52729.2023.01268
BibTeX
@inproceedings{ikehata2023cvpr-scalable,
title = {{Scalable, Detailed and Mask-Free Universal Photometric Stereo}},
author = {Ikehata, Satoshi},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2023},
  pages = {13198--13207},
doi = {10.1109/CVPR52729.2023.01268},
url = {https://mlanthology.org/cvpr/2023/ikehata2023cvpr-scalable/}
}