Stable Video Portraits
Abstract
Rapid advances in generative AI, and in text-to-image methods in particular, have transformed the way we interact with and perceive computer-generated imagery today. In parallel, much progress has been made in 3D face reconstruction using 3D Morphable Models (3DMMs). In this paper, we present Stable Video Portraits, a novel hybrid 2D/3D generation method that outputs photorealistic videos of talking faces, leveraging a large pre-trained text-to-image prior (2D) controlled via a 3DMM (3D). Specifically, we introduce a person-specific fine-tuning of a general 2D Stable Diffusion model, which we lift to a video model by providing temporal 3DMM sequences as conditioning and by introducing a temporal denoising procedure. As output, this model generates temporally smooth imagery of a person with 3DMM-based controls, i.e., a person-specific avatar. The facial appearance of this avatar can be edited and morphed into text-defined celebrities without any fine-tuning at test time. We analyze the method quantitatively and qualitatively, and show that it outperforms state-of-the-art monocular head avatar methods.

Project page: https://svp.is.tue.mpg.de/
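The abstract outlines the core pipeline: a pre-trained Stable Diffusion prior, fine-tuned per person and conditioned on renderings of a tracked 3DMM. Below is a minimal sketch of one plausible setup, assuming a ControlNet-style conditioning branch and the Hugging Face diffusers API; the checkpoint name, the dataloader of (video frame, 3DMM render, text embedding) triples, and the training loop are illustrative assumptions, not the authors' released code.

import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel, ControlNetModel

base = "runwayml/stable-diffusion-v1-5"  # assumed 2D text-to-image prior
vae = AutoencoderKL.from_pretrained(base, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(base, subfolder="unet")
scheduler = DDPMScheduler.from_pretrained(base, subfolder="scheduler")
controlnet = ControlNetModel.from_unet(unet)  # conditioning branch fed with 3DMM renders

vae.requires_grad_(False)
unet.requires_grad_(False)
optimizer = torch.optim.AdamW(controlnet.parameters(), lr=1e-5)

for frame, render, text_emb in dataloader:  # hypothetical person-specific video dataset
    # Encode the ground-truth video frame into the latent space of the prior.
    latents = vae.encode(frame).latent_dist.sample() * vae.config.scaling_factor

    # Standard diffusion training: noise the latents at a random timestep.
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (latents.shape[0],), device=latents.device)
    noisy = scheduler.add_noise(latents, noise, t)

    # The ControlNet consumes the per-frame 3DMM render and injects
    # residual features into the frozen U-Net at every resolution.
    down_res, mid_res = controlnet(noisy, t, encoder_hidden_states=text_emb,
                                   controlnet_cond=render, return_dict=False)
    pred = unet(noisy, t, encoder_hidden_states=text_emb,
                down_block_additional_residuals=down_res,
                mid_block_additional_residual=mid_res).sample

    loss = F.mse_loss(pred, noise)  # epsilon-prediction objective
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

Lifting such a model to video would then amount to running the denoising loop over a temporal 3DMM sequence; the specific temporal denoising procedure is a contribution of the paper and is not detailed in the abstract.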
Cite
Text
Ostrek and Thies. "Stable Video Portraits." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73013-9_11

Markdown

[Ostrek and Thies. "Stable Video Portraits." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/ostrek2024eccv-stable/) doi:10.1007/978-3-031-73013-9_11

BibTeX
@inproceedings{ostrek2024eccv-stable,
  title     = {{Stable Video Portraits}},
  author    = {Ostrek, Mirela and Thies, Justus},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-73013-9_11},
  url       = {https://mlanthology.org/eccv/2024/ostrek2024eccv-stable/}
}