Emotional Listener Portrait: Neural Listener Head Generation with Emotion

Abstract

Listener head generation centers on generating the non-verbal behaviors (e.g., smiles) of a listener in response to the information delivered by a speaker. A significant challenge in generating such responses is the non-deterministic nature of fine-grained facial expressions during a conversation, which vary depending on the emotions and attitudes of both the speaker and the listener. To tackle this problem, we propose the Emotional Listener Portrait (ELP), which treats each fine-grained facial motion as a composition of several discrete motion codewords and explicitly models the probability distribution of the motions under different emotional contexts in conversation. Benefiting from this "explicit" and "discrete" design, our ELP model can not only automatically generate natural and diverse responses to a given speaker by sampling from the learned distribution, but also generate controllable responses with a predetermined attitude. Across several quantitative metrics, ELP exhibits significant improvements over previous methods.
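
The core mechanism described above, a discrete codebook of motion codewords paired with an emotion-conditioned distribution that is sampled for diversity or conditioned for control, can be sketched roughly as below. This is a minimal illustrative sketch under stated assumptions, not the paper's implementation: the class names, tensor sizes, nearest-neighbor quantization, and emotion-label interface are all hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionCodebook(nn.Module):
    """Maps continuous facial-motion features to discrete codewords
    (illustrative; codebook size and feature dim are assumptions)."""
    def __init__(self, num_codewords: int = 256, dim: int = 64):
        super().__init__()
        self.codebook = nn.Embedding(num_codewords, dim)

    def forward(self, z: torch.Tensor):
        # z: (batch, dim) continuous motion feature.
        # Quantize by nearest codeword in Euclidean distance.
        dists = torch.cdist(z, self.codebook.weight)   # (batch, num_codewords)
        indices = dists.argmin(dim=-1)                 # discrete codeword ids
        return self.codebook(indices), indices

    def decode(self, indices: torch.Tensor):
        # Look up codewords for sampled ids.
        return self.codebook(indices)

class EmotionConditionedPrior(nn.Module):
    """Explicitly models a categorical distribution over codewords,
    conditioned on an emotion label (again, a sketch)."""
    def __init__(self, num_emotions: int = 8, num_codewords: int = 256, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Embedding(num_emotions, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_codewords),
        )

    def sample(self, emotion_ids: torch.Tensor):
        logits = self.net(emotion_ids)                 # (batch, num_codewords)
        probs = F.softmax(logits, dim=-1)
        # Sampling (rather than argmax) yields diverse responses;
        # fixing the emotion id gives a controllable attitude.
        return torch.multinomial(probs, num_samples=1).squeeze(-1)

codebook = MotionCodebook()
prior = EmotionConditionedPrior()
emotion = torch.tensor([3])                            # hypothetical "happy" id
codeword_ids = prior.sample(emotion)
motion = codebook.decode(codeword_ids)                 # (1, 64) motion feature

In this sketch, diversity comes from multinomial sampling over the learned distribution, while controllability comes from choosing the emotion id, mirroring the abstract's two claimed capabilities.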

Cite

Text

Song et al. "Emotional Listener Portrait: Neural Listener Head Generation with Emotion." International Conference on Computer Vision, 2023.

Markdown

[Song et al. "Emotional Listener Portrait: Neural Listener Head Generation with Emotion." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/song2023iccv-emotional/)

BibTeX

@inproceedings{song2023iccv-emotional,
  title     = {{Emotional Listener Portrait: Neural Listener Head Generation with Emotion}},
  author    = {Song, Luchuan and Yin, Guojun and Jin, Zhenchao and Dong, Xiaoyi and Xu, Chenliang},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {20839--20849},
  url       = {https://mlanthology.org/iccv/2023/song2023iccv-emotional/}
}