HeadGaS: Real-Time Animatable Head Avatars via 3D Gaussian Splatting
Abstract
3D head animation has seen major quality and runtime improvements over the last few years, driven in particular by advances in differentiable rendering and neural radiance fields. Real-time rendering is a highly desirable goal for real-world applications. We propose HeadGaS, a model that uses 3D Gaussian Splats (3DGS) for 3D head reconstruction and animation. In this paper we introduce a hybrid model that extends the explicit 3DGS representation with a base of learnable latent features, which can be linearly blended with low-dimensional parameters from parametric head models to obtain expression-dependent color and opacity values. We demonstrate that HeadGaS delivers state-of-the-art results at real-time inference frame rates, surpassing baselines by up to 2 dB while rendering more than 10× faster.
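The abstract's core mechanism, per-Gaussian latent features linearly blended with a parametric head model's expression parameters and decoded into color and opacity, can be sketched as follows. This is a minimal PyTorch illustration of that idea; the dimensions, the decoder architecture, and all names below are assumptions for illustration, not the authors' exact implementation.

```python
# Sketch (assumed, not the paper's code): each Gaussian carries a basis of
# learnable latent features that is linearly blended with the low-dimensional
# expression vector of a parametric head model; the blended feature is decoded
# into expression-dependent color and opacity for rasterization with 3DGS.
import torch
import torch.nn as nn

class ExpressionBlendedGaussians(nn.Module):
    def __init__(self, num_gaussians: int, expr_dim: int = 10, feat_dim: int = 32):
        super().__init__()
        # Per-Gaussian feature basis: one latent vector per expression parameter.
        self.feature_basis = nn.Parameter(
            torch.randn(num_gaussians, expr_dim, feat_dim) * 0.01
        )
        # Small decoder mapping the blended feature to RGB (3) + opacity (1).
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 4),
        )

    def forward(self, expression: torch.Tensor):
        # expression: (expr_dim,) parameters from a parametric head model
        # (e.g. blendshape/expression weights).
        # Linear blend of the per-Gaussian basis with the expression parameters.
        blended = torch.einsum("e,gef->gf", expression, self.feature_basis)  # (G, feat_dim)
        out = self.decoder(blended)                                           # (G, 4)
        color = torch.sigmoid(out[:, :3])    # expression-dependent RGB per Gaussian
        opacity = torch.sigmoid(out[:, 3:])  # expression-dependent opacity per Gaussian
        return color, opacity

# Usage: produce per-Gaussian color/opacity for one expression, then hand them
# to a standard 3DGS rasterizer together with the Gaussians' geometry.
model = ExpressionBlendedGaussians(num_gaussians=100_000, expr_dim=10)
color, opacity = model(torch.zeros(10))  # neutral expression as an example
```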
Cite
Text
Dhamo et al. "HeadGaS: Real-Time Animatable Head Avatars via 3D Gaussian Splatting." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72627-9_26
Markdown
[Dhamo et al. "HeadGaS: Real-Time Animatable Head Avatars via 3D Gaussian Splatting." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/dhamo2024eccv-headgas/) doi:10.1007/978-3-031-72627-9_26
BibTeX
@inproceedings{dhamo2024eccv-headgas,
title = {{HeadGaS: Real-Time Animatable Head Avatars via 3D Gaussian Splatting}},
author = {Dhamo, Helisa and Nie, Yinyu and Moreau, Arthur and Song, Jifei and Shaw, Richard and Zhou, Yiren and Pérez-Pellitero, Eduardo},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-72627-9_26},
url = {https://mlanthology.org/eccv/2024/dhamo2024eccv-headgas/}
}