Deep Reflectance Maps
Abstract
Undoing the image formation process and therefore decomposing appearance into its intrinsic properties is a challenging task due to the under-constrained nature of this inverse problem. While significant progress has been made on inferring shape, materials and illumination from images only, progress in an unconstrained setting is still limited. We propose a fully convolutional neural architecture to estimate reflectance maps of specular materials in natural lighting conditions. We achieve this in an end-to-end learning formulation that directly predicts a reflectance map from the image itself. We show how to improve estimates by facilitating additional supervision in an indirect scheme that first predicts surface orientation and afterwards predicts the reflectance map by a learning-based sparse data interpolation. In order to analyze performance on this difficult task, we propose a new challenge of Specular MAterials on SHapes with complex IllumiNation (SMASHINg) using both synthetic and real images. Furthermore, we show the application of our method to a range of image-based editing tasks on real images.
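The second stage of the indirect scheme can be illustrated with a minimal sketch: given per-pixel surface normals and the observed colors, scatter the colors into a map indexed by surface orientation and fill the gaps. The sketch below (Python with NumPy/SciPy) is an assumption-laden stand-in, not the paper's method: it uses a simple nearest-neighbor interpolation where the paper learns the sparse data interpolation, and the function name, grid size, and orthographic normal parameterization are illustrative choices.

```python
import numpy as np
from scipy.interpolate import griddata

def sparse_reflectance_map(normals, image, mask, size=128):
    """Scatter observed colors into a reflectance map indexed by surface
    orientation, then fill the holes by interpolation.

    normals: (H, W, 3) unit surface normals in camera space
    image:   (H, W, 3) observed RGB colors
    mask:    (H, W) boolean foreground mask
    Returns a (size, size, 3) reflectance map over the visible normal
    hemisphere, parameterized by the normal's x/y components.
    """
    n = normals[mask]          # (N, 3) normals of foreground pixels
    c = image[mask]            # (N, 3) corresponding colors

    # Orthographic parameterization: map (nx, ny) in [-1, 1] to grid coords.
    u = (n[:, 0] + 1.0) * 0.5 * (size - 1)
    v = (n[:, 1] + 1.0) * 0.5 * (size - 1)

    # Target grid over the reflectance map.
    gu, gv = np.meshgrid(np.arange(size), np.arange(size))

    # Nearest-neighbor interpolation stands in for the learned interpolation
    # used in the paper; interpolate each color channel separately.
    rmap = np.zeros((size, size, 3))
    for ch in range(3):
        rmap[..., ch] = griddata(
            np.stack([u, v], axis=1), c[:, ch], (gu, gv), method="nearest"
        )
    return rmap
```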
Cite
Text
Rematas et al. "Deep Reflectance Maps." Conference on Computer Vision and Pattern Recognition, 2016. doi:10.1109/CVPR.2016.488
Markdown
[Rematas et al. "Deep Reflectance Maps." Conference on Computer Vision and Pattern Recognition, 2016.](https://mlanthology.org/cvpr/2016/rematas2016cvpr-deep/) doi:10.1109/CVPR.2016.488
BibTeX
@inproceedings{rematas2016cvpr-deep,
title = {{Deep Reflectance Maps}},
author = {Rematas, Konstantinos and Ritschel, Tobias and Fritz, Mario and Gavves, Efstratios and Tuytelaars, Tinne},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2016},
doi = {10.1109/CVPR.2016.488},
url = {https://mlanthology.org/cvpr/2016/rematas2016cvpr-deep/}
}