Relighting4D: Neural Relightable Human from Videos
Abstract
Human relighting is a highly desirable yet challenging task. Existing works either require expensive one-light-at-a-time (OLAT) captured data using a light stage or cannot freely change the viewpoints of the rendered body. In this work, we propose a principled framework, Relighting4D, that enables free-viewpoint relighting from only human videos captured under unknown illuminations. Our key insight is that the space-time varying geometry and reflectance of the human body can be decomposed into a set of neural fields of normal, occlusion, diffuse, and specular maps. These neural fields are further integrated into reflectance-aware physically based rendering, where each vertex in the neural field absorbs and reflects light from the environment. The whole framework can be learned from videos in a self-supervised manner, with physically informed priors designed for regularization. Extensive experiments on both real and synthetic datasets demonstrate that our framework is capable of relighting dynamic human actors under free viewpoints. Code is available at https://github.com/FrozenBurning/Relighting4D.
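The abstract describes shading each point from per-point neural fields of normals, occlusion (visibility), diffuse albedo, and specular reflectance under an environment light. A minimal NumPy sketch of such reflectance-aware shading is below; it is an illustration, not the paper's implementation. The discretization of the environment into `L` directional samples, the Lambertian diffuse term, and the Blinn-Phong-style specular lobe (exponent 32) are all assumptions standing in for the paper's actual physically based rendering model.

```python
import numpy as np

def shade_points(normals, albedo, specular, visibility,
                 light_dirs, light_rgb, view_dir):
    """Shade P surface points under L directional environment samples.

    normals:    (P, 3) unit surface normals (from the normal field)
    albedo:     (P, 3) diffuse albedo (from the diffuse field)
    specular:   (P, 1) specular coefficient (from the specular field)
    visibility: (P, L) per-point, per-light visibility in [0, 1]
                (from the occlusion field)
    light_dirs: (L, 3) unit directions toward the environment samples
    light_rgb:  (L, 3) radiance of each environment sample
    view_dir:   (3,)   unit direction toward the camera
    """
    # Cosine foreshortening term, clamped to the upper hemisphere.
    cos = np.clip(normals @ light_dirs.T, 0.0, None)          # (P, L)
    # Occlusion masks light that the body blocks from reaching the point.
    vis_cos = visibility * cos                                 # (P, L)
    # Lambertian diffuse: (albedo / pi) * sum_l L_l * V_l * cos_l
    diffuse = (albedo / np.pi) * (vis_cos @ light_rgb)         # (P, 3)
    # Blinn-Phong-style specular lobe as a stand-in specular BRDF.
    half = light_dirs + view_dir                               # (L, 3)
    half /= np.linalg.norm(half, axis=-1, keepdims=True)
    lobe = np.clip(normals @ half.T, 0.0, None) ** 32          # (P, L)
    spec = specular * ((visibility * lobe) @ light_rgb)        # (P, 3)
    return diffuse + spec
```

With zero visibility everywhere, the returned radiance is zero, which matches the intuition that a fully occluded point receives no environment light.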
Cite
Text
Chen and Liu. "Relighting4D: Neural Relightable Human from Videos." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19781-9_35

Markdown
[Chen and Liu. "Relighting4D: Neural Relightable Human from Videos." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/chen2022eccv-relighting4d/) doi:10.1007/978-3-031-19781-9_35

BibTeX
@inproceedings{chen2022eccv-relighting4d,
title = {{Relighting4D: Neural Relightable Human from Videos}},
author = {Chen, Zhaoxi and Liu, Ziwei},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2022},
doi = {10.1007/978-3-031-19781-9_35},
url = {https://mlanthology.org/eccv/2022/chen2022eccv-relighting4d/}
}