RigNeRF: Fully Controllable Neural 3D Portraits
Abstract
Volumetric neural rendering methods, such as neural radiance fields (NeRFs), have enabled photo-realistic novel view synthesis. However, in their standard form, NeRFs do not support the editing of objects, such as a human head, within a scene. In this work, we propose RigNeRF, a system that goes beyond just novel view synthesis and enables full control of head pose and facial expressions learned from a single portrait video. We model changes in head pose and facial expressions using a deformation field that is guided by a 3D morphable face model (3DMM). The 3DMM effectively acts as a prior for RigNeRF that learns to predict only residuals to the 3DMM deformations and allows us to render novel (rigid) poses and (non-rigid) expressions that were not present in the input sequence. Using only a smartphone-captured short video of a subject for training, we demonstrate the effectiveness of our method on free view synthesis of a portrait scene with explicit head pose and expression controls.
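The abstract's central idea, a deformation field in which a network predicts only a residual on top of the deformation supplied by the 3DMM prior, can be sketched as below. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: the module structure, the pose/expression conditioning code, and the way the per-point 3DMM offset is obtained are all assumptions.

```python
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    """Illustrative 3DMM-guided deformation field (a sketch, not the
    paper's code): an MLP predicts a residual offset on top of the
    deformation derived from the 3D morphable face model."""

    def __init__(self, point_dim: int = 3, cond_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(point_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # 3D residual offset
        )

    def forward(self, x: torch.Tensor, dmm_offset: torch.Tensor,
                cond: torch.Tensor) -> torch.Tensor:
        # x:          (N, 3) points sampled in the observed (deformed) space
        # dmm_offset: (N, 3) prior deformation from the 3DMM, e.g. the offset
        #             of a nearby 3DMM mesh point between the current frame
        #             and the canonical frame (an assumption for this sketch)
        # cond:       (N, cond_dim) head-pose / expression conditioning code
        residual = self.mlp(torch.cat([x, cond], dim=-1))
        # Map to canonical space: 3DMM prior deformation + learned residual
        return x + dmm_offset + residual
```

Under this reading, the canonical NeRF is queried at the returned canonical-space points; because the MLP only needs to learn a residual, the 3DMM acts as a strong prior that lets the model generalize to head poses and expressions not present in the training video.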
Cite
Text
Athar et al. "RigNeRF: Fully Controllable Neural 3D Portraits." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01972
Markdown
[Athar et al. "RigNeRF: Fully Controllable Neural 3D Portraits." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/athar2022cvpr-rignerf/) doi:10.1109/CVPR52688.2022.01972
BibTeX
@inproceedings{athar2022cvpr-rignerf,
title = {{RigNeRF: Fully Controllable Neural 3D Portraits}},
author = {Athar, ShahRukh and Xu, Zexiang and Sunkavalli, Kalyan and Shechtman, Eli and Shu, Zhixin},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2022},
pages = {20364--20373},
doi = {10.1109/CVPR52688.2022.01972},
url = {https://mlanthology.org/cvpr/2022/athar2022cvpr-rignerf/}
}