Large-Scale Facial Expression Recognition Using Dual-Domain Affect Fusion for Noisy Labels

Abstract

Building models for human facial expression recognition (FER) is made difficult by subjective, ambiguous and noisy annotations. This is especially true when assigning a single emotion class label to facial expressions for large in-the-wild FER datasets. Human facial expressions often contain a mixture of different mental states, which exacerbates the problem of using single labels to categorize emotions. Dimensional models of affect, such as those based on valence and arousal, provide significant advantages over categorical models in representing human emotional states but have remained relatively under-explored. In this paper, we propose an approach for dual-domain affect fusion that investigates the relationships between discrete emotion classes and their continuous representations. To address the underlying uncertainty of the labels, we formulate a set of mixed labels via a dual-domain label fusion module that exploits these intrinsic relationships. Finally, we show the benefits of the proposed approach on AffectNet, Aff-Wild, and MorphSet in the presence of natural and synthetic noise.
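The abstract does not spell out how the dual-domain label fusion is computed, so the following is only a minimal sketch of one plausible reading: blend the (possibly noisy) one-hot categorical label with a soft class distribution derived from the sample's valence-arousal annotation and per-class valence-arousal anchors. The anchor coordinates, temperature, and mixing weight `alpha` below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical per-class (valence, arousal) anchors for 8 common emotion classes.
# These coordinates are illustrative only.
CLASS_ANCHORS = np.array([
    [ 0.00,  0.00],  # neutral
    [ 0.80,  0.50],  # happy
    [-0.70, -0.30],  # sad
    [ 0.40,  0.90],  # surprise
    [-0.10,  0.80],  # fear
    [-0.60,  0.35],  # disgust
    [-0.50,  0.70],  # anger
    [-0.30,  0.10],  # contempt
])

def soft_label_from_va(valence, arousal, temperature=0.5):
    """Map a continuous (valence, arousal) annotation to a soft class distribution."""
    va = np.array([valence, arousal])
    dists = np.linalg.norm(CLASS_ANCHORS - va, axis=1)
    logits = -dists / temperature               # closer anchors -> higher probability
    exp = np.exp(logits - logits.max())         # numerically stable softmax
    return exp / exp.sum()

def fuse_labels(class_index, valence, arousal, alpha=0.6, num_classes=8):
    """Mix the hard categorical label with the VA-derived soft label."""
    one_hot = np.eye(num_classes)[class_index]
    soft = soft_label_from_va(valence, arousal)
    return alpha * one_hot + (1.0 - alpha) * soft

# Example: a sample annotated as "happy" with moderately positive valence/arousal.
mixed = fuse_labels(class_index=1, valence=0.6, arousal=0.4)
print(mixed.round(3), mixed.sum())  # a valid probability distribution over the classes
```

Such a mixed target could then replace the hard label in a standard cross-entropy loss, softening the supervision for samples whose categorical annotation disagrees with their continuous affect annotation.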

Cite

Text

Neo et al. "Large-Scale Facial Expression Recognition Using Dual-Domain Affect Fusion for Noisy Labels." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023. doi:10.1109/CVPRW59228.2023.00603

Markdown

[Neo et al. "Large-Scale Facial Expression Recognition Using Dual-Domain Affect Fusion for Noisy Labels." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023.](https://mlanthology.org/cvprw/2023/neo2023cvprw-largescale/) doi:10.1109/CVPRW59228.2023.00603

BibTeX

@inproceedings{neo2023cvprw-largescale,
  title     = {{Large-Scale Facial Expression Recognition Using Dual-Domain Affect Fusion for Noisy Labels}},
  author    = {Neo, Dexter and Chen, Tsuhan and Winkler, Stefan},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2023},
  pages     = {5692--5700},
  doi       = {10.1109/CVPRW59228.2023.00603},
  url       = {https://mlanthology.org/cvprw/2023/neo2023cvprw-largescale/}
}