Preface: A Data-Driven Volumetric Prior for Few-Shot Ultra High-Resolution Face Synthesis
Abstract
NeRFs have enabled highly realistic synthesis of human faces, including complex appearance and reflectance effects of hair and skin. These methods typically require a large number of multi-view input images, making the process hardware-intensive and cumbersome and limiting applicability to unconstrained settings. We propose a novel volumetric human face prior that enables the synthesis of ultra high-resolution novel views of subjects that are not part of the prior's training distribution. This prior model consists of an identity-conditioned NeRF, trained on a dataset of low-resolution multi-view images of diverse humans with known camera calibration. A simple sparse landmark-based 3D alignment of the training dataset allows our model to learn a smooth latent space of geometry and appearance despite a limited number of training identities. A high-quality volumetric representation of a novel subject can be obtained by model fitting to 2 or 3 camera views of arbitrary resolution. Importantly, our method requires as few as two views of casually captured images as input at inference time.
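The core component described in the abstract is an identity-conditioned NeRF: a radiance field whose density and color depend on a per-subject latent code in addition to 3D position and view direction. The sketch below illustrates that conditioning pattern in PyTorch as a minimal example; the network widths, the positional encoding, the latent dimension, and the names (`IdentityConditionedNeRF`, `num_identities`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    """NeRF-style sinusoidal encoding of coordinates (assumed setup, not from the paper)."""
    freqs = 2.0 ** torch.arange(num_freqs, device=x.device) * torch.pi
    enc = [x]
    for f in freqs:
        enc += [torch.sin(f * x), torch.cos(f * x)]
    return torch.cat(enc, dim=-1)

class IdentityConditionedNeRF(nn.Module):
    """Minimal radiance field conditioned on a learned per-identity latent code."""
    def __init__(self, num_identities, latent_dim=64, hidden=256, num_freqs=6):
        super().__init__()
        # One learnable latent per training identity; a novel subject would be fit by
        # optimizing a fresh latent against its observed views.
        self.identity_codes = nn.Embedding(num_identities, latent_dim)
        pos_dim = 3 * (2 * num_freqs + 1)
        dir_dim = 3 * (2 * num_freqs + 1)
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)
        self.color_head = nn.Sequential(
            nn.Linear(hidden + dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, view_dir, identity_ids):
        # xyz, view_dir: (B, 3); identity_ids: (B,) long tensor of subject indices.
        z = self.identity_codes(identity_ids)
        h = self.trunk(torch.cat([positional_encoding(xyz), z], dim=-1))
        sigma = torch.relu(self.density_head(h))  # volume density
        rgb = self.color_head(torch.cat([h, positional_encoding(view_dir)], dim=-1))
        return rgb, sigma
```

Under this reading, few-shot fitting to two or three calibrated views would plausibly amount to optimizing a new identity code (and possibly fine-tuning the prior) with a rendering loss against the observed pixels; the actual fitting and alignment procedure is detailed in the paper itself.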
Cite
Text
Bühler et al. "Preface: A Data-Driven Volumetric Prior for Few-Shot Ultra High-Resolution Face Synthesis." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.00315
Markdown
[Bühler et al. "Preface: A Data-Driven Volumetric Prior for Few-Shot Ultra High-Resolution Face Synthesis." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/buhler2023iccv-preface/) doi:10.1109/ICCV51070.2023.00315
BibTeX
@inproceedings{buhler2023iccv-preface,
title = {{Preface: A Data-Driven Volumetric Prior for Few-Shot Ultra High-Resolution Face Synthesis}},
author = {Bühler, Marcel C. and Sarkar, Kripasindhu and Shah, Tanmay and Li, Gengyan and Wang, Daoye and Helminger, Leonhard and Orts-Escolano, Sergio and Lagun, Dmitry and Hilliges, Otmar and Beeler, Thabo and Meka, Abhimitra},
booktitle = {International Conference on Computer Vision},
year = {2023},
pages = {3402--3413},
doi = {10.1109/ICCV51070.2023.00315},
url = {https://mlanthology.org/iccv/2023/buhler2023iccv-preface/}
}