Facial Expression Recognition Using a Large Out-of-Context Dataset

Abstract

We develop a method for emotion recognition from facial imagery. This problem is challenging in part because of the subjectivity of ground-truth labels and in part because of the relatively small size of existing labeled datasets. We use the FER+ dataset [8], which provides multiple emotion labels per image, to build an emotion recognition model that covers a full range of emotions. Since the amount of data in FER+ is limited, we explore the use of a much larger face dataset, MS-Celeb-1M [41], in conjunction with FER+. Specific layers within an Inception-ResNet-v1 [13, 38] model trained for face recognition are reused for the emotion recognition problem. We thus leverage the MS-Celeb-1M dataset in addition to FER+ and experiment with different architectures to assess how well neural networks recognize emotion from facial imagery.
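The core idea in the abstract is a transfer-learning setup: layers of a face-recognition network trained on the large MS-Celeb-1M dataset are frozen and reused as a feature extractor, and only a small emotion classifier is trained on the limited FER+ data. A minimal numpy sketch of that pattern follows. This is not the authors' implementation: the frozen linear-plus-ReLU "backbone" stands in for the pretrained Inception-ResNet-v1 layers, and the input size (48x48 crops) and 8 emotion classes are assumptions based on the FER+ dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: 48x48 grayscale face crops (as in FER+), a frozen
# feature layer, and 8 FER+ emotion classes.
N_PIXELS, N_FEATURES, N_CLASSES = 48 * 48, 128, 8

# "Frozen" backbone: a stand-in for fixed layers of a face-recognition
# network (e.g. Inception-ResNet-v1 trained on MS-Celeb-1M).
# Its weights are never updated during emotion training.
W_frozen = rng.standard_normal((N_PIXELS, N_FEATURES)) * 0.01

def extract_features(images):
    """Map flattened face images to fixed identity-oriented features."""
    return np.maximum(images @ W_frozen, 0.0)  # linear layer + ReLU

# Trainable head: a softmax classifier over the emotion labels.
W_head = np.zeros((N_FEATURES, N_CLASSES))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_head(images, labels, lr=0.1, steps=200):
    """Fit only the head by gradient descent; the backbone stays frozen."""
    global W_head
    feats = extract_features(images)        # computed once: backbone is fixed
    onehot = np.eye(N_CLASSES)[labels]
    for _ in range(steps):
        probs = softmax(feats @ W_head)
        grad = feats.T @ (probs - onehot) / len(images)
        W_head -= lr * grad

# Synthetic stand-in data; real inputs would be FER+ face crops.
X = rng.standard_normal((32, N_PIXELS))
y = rng.integers(0, N_CLASSES, size=32)
train_head(X, y)
preds = softmax(extract_features(X) @ W_head).argmax(axis=1)
```

Because only `W_head` is updated, the small FER+ label set never has to train the millions of backbone parameters, which is the point of borrowing the out-of-context MS-Celeb-1M model.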

Cite

Text

Tran et al. "Facial Expression Recognition Using a Large Out-of-Context Dataset." IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, 2018. doi:10.1109/WACVW.2018.00012

Markdown

[Tran et al. "Facial Expression Recognition Using a Large Out-of-Context Dataset." IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, 2018.](https://mlanthology.org/wacvw/2018/tran2018wacvw-facial/) doi:10.1109/WACVW.2018.00012

BibTeX

@inproceedings{tran2018wacvw-facial,
  title     = {{Facial Expression Recognition Using a Large Out-of-Context Dataset}},
  author    = {Tran, Elizabeth and Mayhew, Michael B. and Kim, Hyojin and Karande, Piyush and Kaplan, Alan David},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision Workshops},
  year      = {2018},
  pages     = {52--59},
  doi       = {10.1109/WACVW.2018.00012},
  url       = {https://mlanthology.org/wacvw/2018/tran2018wacvw-facial/}
}