BlendFields: Few-Shot Example-Driven Facial Modeling
Abstract
Generating faithful visualizations of human faces requires capturing both coarse and fine-level details of the face geometry and appearance. Existing methods are either data-driven, requiring an extensive corpus of data not publicly accessible to the research community, or fail to capture fine details because they rely on geometric face models whose mesh discretization and linear deformation are designed to model only coarse face geometry and cannot represent fine-grained texture detail. We introduce a method that bridges this gap by drawing inspiration from traditional computer graphics techniques. Unseen expressions are modeled by blending appearance from a sparse set of extreme poses. This blending is performed by measuring local volumetric changes in those expressions and locally reproducing their appearance whenever a similar expression is performed at test time. We show that our method generalizes to unseen expressions, adding fine-grained effects on top of smooth volumetric deformations of a face, and demonstrate how it generalizes beyond faces.
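The blending step described in the abstract can be made concrete with a small sketch: measure the local volumetric change of each exemplar ("extreme") expression relative to a neutral mesh, then derive per-region blend weights by comparing the test-time expression's local change against the exemplars. This is a minimal NumPy illustration only; the tetrahedral parameterization, the absolute-difference similarity, and the softmax with a temperature are assumptions made here for clarity, not the paper's exact weighting scheme, and the appearance fields being blended are not reproduced.

import numpy as np

# Sketch only: the mesh representation and the similarity measure below are
# illustrative assumptions, not the paper's exact formulation.

def tet_volumes(tets):
    # tets: (T, 4, 3) array of T tetrahedra with 4 xyz vertices each.
    # Signed volume of a tetrahedron: det([v1-v0, v2-v0, v3-v0]) / 6.
    edges = tets[:, 1:] - tets[:, :1]   # (T, 3, 3)
    return np.linalg.det(edges) / 6.0   # (T,)

def local_volume_change(neutral_tets, deformed_tets):
    # Per-tetrahedron volume ratio of a deformed mesh vs. the neutral one.
    return tet_volumes(deformed_tets) / tet_volumes(neutral_tets)

def blend_weights(test_change, exemplar_changes, temperature=0.1):
    # test_change: (T,) volume ratios for the current (unseen) expression.
    # exemplar_changes: (K, T) volume ratios for the K extreme expressions.
    # Returns (K, T) weights summing to 1 over K for every tetrahedron, so
    # regions deforming like exemplar k reproduce exemplar k's appearance.
    dists = np.abs(exemplar_changes - test_change[None, :])   # (K, T)
    logits = -dists / temperature
    logits -= logits.max(axis=0, keepdims=True)               # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=0, keepdims=True)

Under these assumptions, with weights w_k(x) looked up from the tetrahedron enclosing a query point x, the blended appearance would take the form c(x) = sum_k w_k(x) c_k(x), where each c_k is the appearance learned from the k-th extreme pose.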
Cite
Text
Kania et al. "BlendFields: Few-Shot Example-Driven Facial Modeling." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.00047
Markdown
[Kania et al. "BlendFields: Few-Shot Example-Driven Facial Modeling." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/kania2023cvpr-blendfields/) doi:10.1109/CVPR52729.2023.00047
BibTeX
@inproceedings{kania2023cvpr-blendfields,
title = {{BlendFields: Few-Shot Example-Driven Facial Modeling}},
author = {Kania, Kacper and Garbin, Stephan J. and Tagliasacchi, Andrea and Estellers, Virginia and Yi, Kwang Moo and Valentin, Julien and Trzciński, Tomasz and Kowalski, Marek},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2023},
pages = {404--415},
doi = {10.1109/CVPR52729.2023.00047},
url = {https://mlanthology.org/cvpr/2023/kania2023cvpr-blendfields/}
}