Deep Spatio-Temporal Features for Multimodal Emotion Recognition

Abstract

Automatic emotion recognition has attracted great interest, and numerous solutions have been proposed, most of which focus individually on either facial expressions or acoustic information. While more recent research has considered multimodal approaches, the individual modalities are often combined only by simple fusion at the feature and/or decision level. In this paper, we introduce a novel approach that uses 3-dimensional convolutional neural networks (C3Ds) to model spatio-temporal information, cascaded with multimodal deep-belief networks (DBNs) that represent the audio and video streams. Experiments conducted on the eNTERFACE multimodal emotion database demonstrate that this approach leads to improved multimodal emotion recognition performance and significantly outperforms the recent state-of-the-art.
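
The cascade described in the abstract can be illustrated with a minimal sketch, assuming PyTorch: a small C3D-style branch extracts spatio-temporal features from each modality's clip tensor, and the concatenated features are passed to a fusion classifier. All layer sizes, input shapes, and the MLP used here in place of the multimodal DBN are illustrative assumptions, not the authors' implementation or the values reported in the paper.

import torch
import torch.nn as nn


class C3DFeatures(nn.Module):
    """Tiny C3D-style feature extractor over (batch, channels, frames, H, W)."""

    def __init__(self, in_channels: int, feat_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global spatio-temporal pooling
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.conv(x).flatten(1))


class MultimodalEmotionNet(nn.Module):
    """Audio and video C3D branches fused by an MLP (a stand-in for the DBN fusion stage)."""

    def __init__(self, num_emotions: int = 6, feat_dim: int = 256):
        super().__init__()
        self.video_branch = C3DFeatures(in_channels=3, feat_dim=feat_dim)  # RGB face clips
        self.audio_branch = C3DFeatures(in_channels=1, feat_dim=feat_dim)  # e.g. stacked spectrogram patches (assumed representation)
        self.fusion = nn.Sequential(
            nn.Linear(2 * feat_dim, 512),
            nn.ReLU(),
            nn.Linear(512, num_emotions),  # eNTERFACE covers six basic emotions
        )

    def forward(self, video: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.video_branch(video), self.audio_branch(audio)], dim=1)
        return self.fusion(fused)


if __name__ == "__main__":
    model = MultimodalEmotionNet()
    video = torch.randn(2, 3, 16, 112, 112)  # 16 RGB frames per clip
    audio = torch.randn(2, 1, 16, 32, 32)    # illustrative spectrogram "clip"
    print(model(video, audio).shape)         # torch.Size([2, 6])

The key design point carried over from the paper is that each modality is modelled with its own spatio-temporal (3D convolutional) branch before any fusion, rather than fusing raw features or decisions directly; the fusion module here is only a placeholder for the deep-belief network the authors use.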

Cite

Text

Tien et al. "Deep Spatio-Temporal Features for Multimodal Emotion Recognition." IEEE/CVF Winter Conference on Applications of Computer Vision, 2017. doi:10.1109/WACV.2017.140

Markdown

[Tien et al. "Deep Spatio-Temporal Features for Multimodal Emotion Recognition." IEEE/CVF Winter Conference on Applications of Computer Vision, 2017.](https://mlanthology.org/wacv/2017/tien2017wacv-deep/) doi:10.1109/WACV.2017.140

BibTeX

@inproceedings{tien2017wacv-deep,
  title     = {{Deep Spatio-Temporal Features for Multimodal Emotion Recognition}},
  author    = {Tien, Dung Nguyen and Thanh, Kien Nguyen and Sridharan, Sridha and Ghasemi, Afsane and Dean, David and Fookes, Clinton},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
  year      = {2017},
  pages     = {1215--1223},
  doi       = {10.1109/WACV.2017.140},
  url       = {https://mlanthology.org/wacv/2017/tien2017wacv-deep/}
}