Deep Photometric Stereo Network
Abstract
This paper presents a photometric stereo method based on deep learning. One of the major difficulties in photometric stereo is designing an appropriate reflectance model that is both capable of representing real-world reflectances and computationally tractable for deriving surface normals. Unlike previous photometric stereo methods that rely on a simplified parametric image formation model, such as the Lambertian model, the proposed method aims at establishing a flexible mapping between complex reflectance observations and surface normals by means of a deep neural network. As a result, we propose a deep photometric stereo network (DPSN) that takes reflectance observations under varying light directions and infers the corresponding surface normal per pixel. To make the DPSN applicable to real-world objects, a database of measured bidirectional reflectance distribution functions (the MERL BRDF database) has been used for training the network. Evaluation on simulated and real-world scenes shows the effectiveness of the proposed approach over previous techniques.
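The core idea described in the abstract, a learned per-pixel mapping from reflectance observations under varying lights to a unit surface normal, can be sketched as a small fully connected network. The sketch below is illustrative only: the layer sizes, depth, and random weights are assumptions for demonstration, not the paper's actual DPSN architecture or trained parameters.

```python
import numpy as np

def dpsn_forward(obs, weights):
    """Sketch of a per-pixel mapping: m reflectance observations -> unit
    surface normal. Layer count and widths are illustrative assumptions,
    not the architecture from the paper."""
    h = obs
    for W, b in weights[:-1]:
        h = np.maximum(W @ h + b, 0.0)  # dense layer + ReLU
    W, b = weights[-1]
    n = W @ h + b                       # raw 3-vector output
    return n / np.linalg.norm(n)        # project to a unit normal

# Illustrative random initialization: 10 light directions -> 3D normal.
rng = np.random.default_rng(0)
sizes = [10, 64, 64, 3]
weights = [(rng.standard_normal((out, inp)) * 0.1, np.zeros(out))
           for inp, out in zip(sizes[:-1], sizes[1:])]

obs = rng.random(10)                    # intensities under 10 lights
normal = dpsn_forward(obs, weights)
print(normal.shape, float(np.linalg.norm(normal)))
```

In the paper's setting such a network would be trained on observations rendered with measured MERL BRDFs so that, at test time, it maps real reflectance measurements directly to normals without an explicit parametric reflectance model.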
Cite
Text
Santo et al. "Deep Photometric Stereo Network." IEEE/CVF International Conference on Computer Vision Workshops, 2017. doi:10.1109/ICCVW.2017.66
Markdown
[Santo et al. "Deep Photometric Stereo Network." IEEE/CVF International Conference on Computer Vision Workshops, 2017.](https://mlanthology.org/iccvw/2017/santo2017iccvw-deep/) doi:10.1109/ICCVW.2017.66
BibTeX
@inproceedings{santo2017iccvw-deep,
title = {{Deep Photometric Stereo Network}},
author = {Santo, Hiroaki and Samejima, Masaki and Sugano, Yusuke and Shi, Boxin and Matsushita, Yasuyuki},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2017},
pages = {501-509},
doi = {10.1109/ICCVW.2017.66},
url = {https://mlanthology.org/iccvw/2017/santo2017iccvw-deep/}
}