MEAD: A Large-Scale Audio-Visual Dataset for Emotional Talking-Face Generation

Abstract

The synthesis of natural emotional reactions is an essential criterion in vivid talking-face video generation. This criterion has nevertheless seldom been taken into consideration in previous works due to the absence of a large-scale, high-quality emotional audio-visual dataset. To address this issue, we build the Multi-view Emotional Audio-visual Dataset (MEAD), a talking-face video corpus featuring 60 actors and actresses talking with 8 different emotions at 3 different intensity levels. High-quality audio-visual clips are captured at 7 different view angles in a strictly-controlled environment. Together with the dataset, we release an emotional talking-face generation baseline that enables the manipulation of both emotion and its intensity. Our dataset will be made public and could benefit a number of research fields, including conditional generation, cross-modal understanding and expression recognition.

Cite

Text

Kaisiyuan Wang, Qianyi Wu, Linsen Song, Zhuoqian Yang, Wayne Wu, Chen Qian, Ran He, Yu Qiao, and Chen Change Loy. "MEAD: A Large-Scale Audio-Visual Dataset for Emotional Talking-Face Generation." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58589-1_42

Markdown

[Kaisiyuan Wang, Qianyi Wu, Linsen Song, Zhuoqian Yang, Wayne Wu, Chen Qian, Ran He, Yu Qiao, and Chen Change Loy. "MEAD: A Large-Scale Audio-Visual Dataset for Emotional Talking-Face Generation." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/loy2020eccv-mead/) doi:10.1007/978-3-030-58589-1_42

BibTeX

@inproceedings{loy2020eccv-mead,
  title     = {{MEAD: A Large-Scale Audio-Visual Dataset for Emotional Talking-Face Generation}},
  author    = {Wang, Kaisiyuan and Wu, Qianyi and Song, Linsen and Yang, Zhuoqian and Wu, Wayne and Qian, Chen and He, Ran and Qiao, Yu and Loy, Chen Change},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020},
  doi       = {10.1007/978-3-030-58589-1_42},
  url       = {https://mlanthology.org/eccv/2020/loy2020eccv-mead/}
}