I Know How You Feel: Emotion Recognition with Facial Landmarks

Abstract

Classification of human emotions remains an important and challenging task for many computer vision algorithms, especially in the era of humanoid robots that coexist with humans in their everyday life. Currently proposed methods for emotion recognition solve this task using multi-layered convolutional networks that do not explicitly infer any facial features in the classification phase. In this work, we postulate a fundamentally different approach to the emotion recognition task that relies on incorporating facial landmarks as a part of the classification loss function. To that end, we extend the recently proposed Deep Alignment Network (DAN), which achieved state-of-the-art results in a recent facial landmark recognition challenge, with a term related to facial features. Thanks to this simple modification, our model, called EmotionalDAN, is able to outperform state-of-the-art emotion classification methods on two challenging benchmark datasets by up to 5%.
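The abstract describes combining an emotion classification objective with a facial-landmark term in a single loss. As a rough illustration of that idea only, the sketch below shows one way such a joint loss could be written; the weighting factor lambda_landmarks, the mean-squared-error landmark term, and all variable names are assumptions for illustration and are not taken from the paper.

# Minimal sketch of a joint emotion + landmark loss, assuming PyTorch tensors.
# The exact formulation used by EmotionalDAN may differ; this only illustrates
# the idea of adding a landmark-related term to the classification loss.
import torch
import torch.nn.functional as F

def joint_loss(emotion_logits, emotion_labels,
               predicted_landmarks, ground_truth_landmarks,
               lambda_landmarks=0.5):
    # emotion_logits:         (batch, num_emotions) raw class scores
    # emotion_labels:         (batch,) integer emotion labels
    # predicted_landmarks:    (batch, num_landmarks, 2) predicted (x, y) points
    # ground_truth_landmarks: (batch, num_landmarks, 2) annotated (x, y) points
    # lambda_landmarks:       assumed weight of the landmark term

    # Standard cross-entropy over emotion classes.
    classification_loss = F.cross_entropy(emotion_logits, emotion_labels)

    # A simple landmark regression term (mean squared error).
    landmark_loss = F.mse_loss(predicted_landmarks, ground_truth_landmarks)

    return classification_loss + lambda_landmarks * landmark_loss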

Cite

Text

Tautkute et al. "I Know How You Feel: Emotion Recognition with Facial Landmarks." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2018. doi:10.1109/CVPRW.2018.00246

Markdown

[Tautkute et al. "I Know How You Feel: Emotion Recognition with Facial Landmarks." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2018.](https://mlanthology.org/cvprw/2018/tautkute2018cvprw-know/) doi:10.1109/CVPRW.2018.00246

BibTeX

@inproceedings{tautkute2018cvprw-know,
  title     = {{I Know How You Feel: Emotion Recognition with Facial Landmarks}},
  author    = {Tautkute, Ivona and Trzcinski, Tomasz and Bielski, Adam},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2018},
  pages     = {1878--1880},
  doi       = {10.1109/CVPRW.2018.00246},
  url       = {https://mlanthology.org/cvprw/2018/tautkute2018cvprw-know/}
}