Emotion Recognition with Sequential Multi-Task Learning Technique

Abstract

The task of predicting affective information in the wild, such as the seven basic emotions or action units, from human faces has gradually attracted more interest due to the accessibility and availability of massive annotated datasets. In this study, we propose a method that exploits the association between the seven basic emotions and twelve action units in the AffWild2 dataset. The method, built on a ResNet50 architecture, applies a multi-task learning technique to handle the incomplete labels of the two tasks. By combining knowledge from the two correlated tasks, performance on both is improved by a large margin compared to a model trained with only one kind of label.
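To illustrate the general idea of sharing a ResNet50 backbone across the two tasks while tolerating incomplete labels, the sketch below shows one possible PyTorch-style setup. It is not the authors' implementation: the class name, the two linear heads, and the convention of marking missing labels with -1 are all assumptions made for illustration, and the sequential training schedule described in the paper is not shown.

import torch
import torch.nn as nn
from torchvision import models


class MultiTaskEmotionAU(nn.Module):
    """Shared ResNet50 features with one head per task (illustrative sketch)."""

    def __init__(self, num_emotions: int = 7, num_aus: int = 12):
        super().__init__()
        backbone = models.resnet50(weights=None)
        feat_dim = backbone.fc.in_features            # 2048 for ResNet50
        backbone.fc = nn.Identity()                   # drop the ImageNet classifier
        self.backbone = backbone
        self.emotion_head = nn.Linear(feat_dim, num_emotions)  # 7-way classification
        self.au_head = nn.Linear(feat_dim, num_aus)             # 12 binary AU outputs

    def forward(self, x):
        feats = self.backbone(x)
        return self.emotion_head(feats), self.au_head(feats)


def masked_multitask_loss(emo_logits, au_logits, emo_labels, au_labels):
    """Sum the two task losses, skipping samples whose labels are missing (-1)."""
    loss = emo_logits.new_zeros(())
    emo_mask = emo_labels >= 0                        # samples with an emotion label
    if emo_mask.any():
        loss = loss + nn.functional.cross_entropy(
            emo_logits[emo_mask], emo_labels[emo_mask])
    au_mask = (au_labels >= 0).all(dim=1)             # samples with full AU annotations
    if au_mask.any():
        loss = loss + nn.functional.binary_cross_entropy_with_logits(
            au_logits[au_mask], au_labels[au_mask].float())
    return loss

Masking the per-task losses in this way lets a single batch mix samples annotated for only one of the two tasks, which is one common way to train on partially labeled multi-task data.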

Cite

Text

Thinh et al. "Emotion Recognition with Sequential Multi-Task Learning Technique." IEEE/CVF International Conference on Computer Vision Workshops, 2021. doi:10.1109/ICCVW54120.2021.00400

Markdown

[Thinh et al. "Emotion Recognition with Sequential Multi-Task Learning Technique." IEEE/CVF International Conference on Computer Vision Workshops, 2021.](https://mlanthology.org/iccvw/2021/thinh2021iccvw-emotion/) doi:10.1109/ICCVW54120.2021.00400

BibTeX

@inproceedings{thinh2021iccvw-emotion,
  title     = {{Emotion Recognition with Sequential Multi-Task Learning Technique}},
  author    = {Thinh, Phan Tran Dac and Hung, Hoang Manh and Yang, Hyung-Jeong and Kim, Soo-Hyung and Lee, Guee-Sang},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2021},
  pages     = {3586-3589},
  doi       = {10.1109/ICCVW54120.2021.00400},
  url       = {https://mlanthology.org/iccvw/2021/thinh2021iccvw-emotion/}
}