Animating Face Using Disentangled Audio Representations

Abstract

Previous methods for audio-driven talking head generation assume the input audio to be clean and neutral in tone. As we show empirically, these systems can easily be broken by simply adding background noise to the utterance or changing its emotional tone (to, for example, sad). To make talking head generation robust to such variations, we propose an explicit audio representation learning framework that disentangles audio sequences into factors such as phonetic content, emotional tone, and background noise. We conduct experiments to validate that, when conditioned on the disentangled content representation, the mouth movement generated by our model is significantly more accurate than that of previous approaches (which lack disentangled learning) in the presence of noise and emotional variations. We further demonstrate that our framework is compatible with current state-of-the-art approaches by replacing their original audio representation learning components with ours. To the best of our knowledge, this is the first work to improve talking head generation from a disentangled audio representation perspective, which is important for many real-world applications.
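
For intuition, below is a minimal PyTorch sketch of what a disentangled audio encoder of this kind might look like. The factor heads, layer sizes, feature dimensions, and pooling scheme are illustrative assumptions, not the authors' actual architecture; the idea it illustrates is that only the content code conditions the talking-head generator, so nuisance factors (emotion, noise) do not corrupt mouth movement.

# Illustrative sketch only; names and dimensions are assumptions, not the paper's design.
import torch
import torch.nn as nn

class DisentangledAudioEncoder(nn.Module):
    """Encodes an audio feature sequence (e.g., MFCC frames) into separate
    latent codes for phonetic content, emotional tone, and background noise."""

    def __init__(self, feat_dim=28, hidden_dim=256, code_dim=64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # One head per factor; each produces its own latent code.
        self.content_head = nn.Linear(hidden_dim, code_dim)
        self.emotion_head = nn.Linear(hidden_dim, code_dim)
        self.noise_head = nn.Linear(hidden_dim, code_dim)

    def forward(self, audio_feats):
        # audio_feats: (batch, time, feat_dim)
        hidden, _ = self.rnn(audio_feats)
        # Content is kept per frame (it drives the mouth movement), while
        # emotion and noise are pooled over time as sequence-level factors.
        content = self.content_head(hidden)   # (batch, time, code_dim)
        pooled = hidden.mean(dim=1)           # (batch, hidden_dim)
        emotion = self.emotion_head(pooled)   # (batch, code_dim)
        noise = self.noise_head(pooled)       # (batch, code_dim)
        return content, emotion, noise

# Usage: only the content code would be passed to the face generator.
encoder = DisentangledAudioEncoder()
feats = torch.randn(2, 100, 28)  # two utterances, 100 frames of 28-dim features
content, emotion, noise = encoder(feats)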

Cite

Text

Mittal and Wang. "Animating Face Using Disentangled Audio Representations." Winter Conference on Applications of Computer Vision, 2020.

Markdown

[Mittal and Wang. "Animating Face Using Disentangled Audio Representations." Winter Conference on Applications of Computer Vision, 2020.](https://mlanthology.org/wacv/2020/mittal2020wacv-animating/)

BibTeX

@inproceedings{mittal2020wacv-animating,
  title     = {{Animating Face Using Disentangled Audio Representations}},
  author    = {Mittal, Gaurav and Wang, Baoyuan},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2020},
  url       = {https://mlanthology.org/wacv/2020/mittal2020wacv-animating/}
}