PVNN: A Neural Network Library for Photometric Vision
Abstract
In this paper we show how a differentiable, physics-based renderer suitable for photometric vision tasks can be implemented as layers in a deep neural network. The layers include geometric operations for representation transformations, reflectance evaluations with arbitrary numbers of light sources and statistical bidirectional reflectance distribution function (BRDF) models. We make an implementation of these layers available as a neural network library (PVNN) for Theano. The layers can be incorporated into any neural network architecture, allowing parts of the photometric image formation process to be explicitly modelled in a network that is trained end to end via backpropagation. As an exemplar application, we show how to train a network with encoder-decoder architecture that learns to estimate BRDF parameters from a single image in an unsupervised manner.
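To give a flavour of the idea, here is a minimal sketch of one such differentiable layer: Lambertian reflectance under multiple distant light sources, where back-facing contributions are clamped to zero (sub-differentiable, like a ReLU). This is an illustrative NumPy example with hypothetical names, not the PVNN/Theano API.

```python
import numpy as np

def lambertian_layer(normals, light_dirs, albedo):
    """Differentiable Lambertian reflectance: I_p = albedo_p * sum_j max(0, n_p . l_j).

    normals:    (P, 3) unit surface normals, one per pixel
    light_dirs: (L, 3) unit directions toward each light source
    albedo:     (P,)   diffuse albedo per pixel
    """
    # Cosine between every normal and every light direction, shape (P, L)
    cosines = normals @ light_dirs.T
    # Clamp back-facing (negative) contributions to zero, then sum over lights
    shading = np.maximum(cosines, 0.0).sum(axis=1)
    return albedo * shading

# Two pixels, two distant lights
normals = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
lights = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
albedo = np.array([0.5, 0.8])
print(lambertian_layer(normals, lights, albedo))  # → [0.5 0. ]
```

Because every operation here (matrix product, clamp, sum, elementwise product) has a well-defined gradient, a symbolic framework such as Theano can backpropagate through it, which is what allows the image formation process to sit inside a network trained end to end.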
Cite
Text
Yu and Smith. "PVNN: A Neural Network Library for Photometric Vision." IEEE/CVF International Conference on Computer Vision Workshops, 2017. doi:10.1109/ICCVW.2017.69
Markdown
[Yu and Smith. "PVNN: A Neural Network Library for Photometric Vision." IEEE/CVF International Conference on Computer Vision Workshops, 2017.](https://mlanthology.org/iccvw/2017/yu2017iccvw-pvnn/) doi:10.1109/ICCVW.2017.69
BibTeX
@inproceedings{yu2017iccvw-pvnn,
title = {{PVNN: A Neural Network Library for Photometric Vision}},
author = {Yu, Ye and Smith, William A. P.},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2017},
pages = {526-535},
doi = {10.1109/ICCVW.2017.69},
url = {https://mlanthology.org/iccvw/2017/yu2017iccvw-pvnn/}
}