Pay Attention to Devils: A Photometric Stereo Network for Better Details
Abstract
We present an attention-weighted loss for a photometric stereo neural network to improve the accuracy of 3D surface recovery in areas with complex structure, such as edges and crinkles, where existing learning-based methods often fail. Instead of applying a uniform penalty to all pixels, our method employs a per-pixel attention-weighted loss learned in a self-supervised manner, avoiding blurry reconstruction results in such difficult regions. The network first estimates a surface normal map and an adaptive attention map; the latter is then used to compute a pixel-wise attention-weighted loss that focuses on complex regions. In these regions, the attention-weighted loss assigns higher weight to a detail-preserving gradient loss, producing sharp surface reconstructions. Experiments on real datasets show that our approach significantly outperforms both traditional photometric stereo algorithms and state-of-the-art learning-based methods.
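The per-pixel blending described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the cosine normal term, the first-difference gradient term, and the linear blending rule are assumptions chosen to show how an attention map could reweight a detail-preserving loss in complex regions.

```python
import numpy as np

def attention_weighted_loss(pred_n, gt_n, attention):
    """Hypothetical sketch of a pixel-wise attention-weighted loss.

    pred_n, gt_n : (H, W, 3) arrays of unit surface normals.
    attention    : (H, W) learned map in [0, 1]; high values mark
                   complex regions (edges, crinkles).
    """
    # Per-pixel normal error: one minus the cosine of the angle
    # between predicted and ground-truth normals.
    cos = np.clip(np.sum(pred_n * gt_n, axis=-1), -1.0, 1.0)
    normal_loss = 1.0 - cos  # (H, W)

    # Detail-preserving gradient term: L1 difference of first-order
    # spatial gradients of the normal maps (emphasizes edges).
    def grads(n):
        gx = np.diff(n, axis=1, prepend=n[:, :1])
        gy = np.diff(n, axis=0, prepend=n[:1, :])
        return gx, gy

    pgx, pgy = grads(pred_n)
    ggx, ggy = grads(gt_n)
    grad_loss = (np.abs(pgx - ggx).sum(-1)
                 + np.abs(pgy - ggy).sum(-1))  # (H, W)

    # Attention blends the two terms per pixel: high-attention
    # (complex) pixels lean on the gradient term, flat regions on
    # the plain normal term.
    per_pixel = (1.0 - attention) * normal_loss + attention * grad_loss
    return per_pixel.mean()
```

For a perfect prediction both terms vanish, so the loss is zero regardless of the attention map; any deviation near an edge is penalized more heavily where the attention is high.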
Cite
Text
Ju et al. "Pay Attention to Devils: A Photometric Stereo Network for Better Details." International Joint Conference on Artificial Intelligence, 2020. doi:10.24963/IJCAI.2020/97
Markdown
[Ju et al. "Pay Attention to Devils: A Photometric Stereo Network for Better Details." International Joint Conference on Artificial Intelligence, 2020.](https://mlanthology.org/ijcai/2020/ju2020ijcai-pay/) doi:10.24963/IJCAI.2020/97
BibTeX
@inproceedings{ju2020ijcai-pay,
title = {{Pay Attention to Devils: A Photometric Stereo Network for Better Details}},
author = {Ju, Yakun and Lam, Kin-Man and Chen, Yang and Qi, Lin and Dong, Junyu},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2020},
  pages = {694--700},
doi = {10.24963/IJCAI.2020/97},
url = {https://mlanthology.org/ijcai/2020/ju2020ijcai-pay/}
}