Diffused Heads: Diffusion Models Beat GANs on Talking-Face Generation

Abstract

Talking face generation has historically struggled to produce head movements and natural facial expressions without guidance from additional reference videos. Recent developments in diffusion-based generative models allow for more realistic and stable data synthesis, and their performance on image and video generation has surpassed that of other generative models. In this work, we present an autoregressive diffusion model that requires only a single identity image and an audio sequence to generate a video of a realistic talking head. Our solution hallucinates head movements and facial expressions, such as blinks, while preserving a given background. We evaluate our model on two different datasets, achieving state-of-the-art expressiveness and smoothness on both.
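
For intuition, below is a minimal sketch of the kind of frame-by-frame autoregressive sampling the abstract describes: each frame is denoised from pure noise by a DDPM reverse process conditioned on the identity image, a sliding window of previously generated frames, and a per-frame audio embedding. The interface here (`eps_model`, `audio_emb`, the `n_motion` window, channel-wise conditioning) is an assumption made for illustration, not the authors' released implementation.

```python
import torch

@torch.no_grad()
def sample_video(eps_model, identity, audio_emb, n_frames, n_motion=2, T=1000):
    """Sketch of autoregressive talking-head sampling (assumed interface).

    eps_model : noise-prediction UNet taking (x_t, t, cond, audio) -- hypothetical
    identity  : (B, C, H, W) identity image
    audio_emb : (B, n_frames, D) per-frame audio embeddings -- hypothetical shape
    """
    device = identity.device
    betas = torch.linspace(1e-4, 0.02, T, device=device)  # standard DDPM schedule
    alphas = 1.0 - betas
    alphas_bar = torch.cumprod(alphas, dim=0)

    # Bootstrap the motion window by repeating the identity frame.
    motion = [identity] * n_motion
    frames = []
    for k in range(n_frames):
        # Condition on identity plus the last n_motion frames, channel-wise.
        cond = torch.cat([identity, *motion], dim=1)
        x = torch.randn_like(identity)  # start each frame from pure noise
        for t in reversed(range(T)):    # DDPM ancestral sampling
            t_batch = torch.full((x.shape[0],), t, device=device, dtype=torch.long)
            eps = eps_model(x, t_batch, cond, audio_emb[:, k])
            mean = (x - betas[t] / (1 - alphas_bar[t]).sqrt() * eps) / alphas[t].sqrt()
            x = mean + betas[t].sqrt() * torch.randn_like(x) if t > 0 else mean
        frames.append(x)
        motion = motion[1:] + [x]       # slide the motion window forward
    return torch.stack(frames, dim=1)   # (B, n_frames, C, H, W)
```

Sliding the motion window over each newly generated frame is what makes the process autoregressive: every frame's denoising is conditioned on the frames generated before it, which is how head motion and expressions can evolve without a reference video.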

Cite

Text

Stypułkowski et al. "Diffused Heads: Diffusion Models Beat GANs on Talking-Face Generation." Winter Conference on Applications of Computer Vision, 2024.

Markdown

[Stypułkowski et al. "Diffused Heads: Diffusion Models Beat GANs on Talking-Face Generation." Winter Conference on Applications of Computer Vision, 2024.](https://mlanthology.org/wacv/2024/stypukowski2024wacv-diffused/)

BibTeX

@inproceedings{stypukowski2024wacv-diffused,
  title     = {{Diffused Heads: Diffusion Models Beat GANs on Talking-Face Generation}},
  author    = {Stypułkowski, Michał and Vougioukas, Konstantinos and He, Sen and Zięba, Maciej and Petridis, Stavros and Pantic, Maja},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2024},
  pages     = {5091--5100},
  url       = {https://mlanthology.org/wacv/2024/stypukowski2024wacv-diffused/}
}