Real vs. Fake Emotion Challenge: Learning to Rank Authenticity from Facial Activity Descriptors

Abstract

Distinguishing real from fake expressions is an emerging research topic. We propose a new method that ranks the authenticity of multiple videos based on facial activity descriptors; it won the ChaLearn real vs. fake emotion challenge. Two studies with 22 human observers show that our method outperforms humans by a large margin. We further show that our proposed ranking method is superior to direct classification. However, when humans are asked to compare two videos from the same subject and emotion before deciding which is fake and which is real, there is no significant increase in performance compared to classifying each video individually. This suggests that our computer vision model is able to exploit facial attributes that are invisible to humans. The code is available at https://github.com/fsaxen/NIT-ICCV17Challenge.
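The core ranking idea can be sketched as follows. This is a hypothetical illustration with synthetic data, not the authors' implementation: given descriptors for a pair of videos from the same subject and emotion, a linear scoring model is trained on pairwise feature differences (a RankSVM-style reduction, here solved with a simple perceptron), and the video with the higher score is predicted as "real".

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for facial activity descriptors: purely synthetic data where
# "real" videos are shifted along a hidden direction relative to "fake" ones.
dim, n_pairs = 16, 200
hidden = rng.normal(size=dim)
fake = rng.normal(size=(n_pairs, dim))
real = fake + 0.5 * hidden + 0.3 * rng.normal(size=(n_pairs, dim))

# RankSVM-style reduction: learn w such that w @ (real - fake) > 0,
# i.e. classify the pairwise differences with a perceptron.
w = np.zeros(dim)
for _ in range(50):
    for d in real - fake:
        if w @ d <= 0:
            w += d

# Ranking a pair: the video with the higher authenticity score wins.
accuracy = np.mean(real @ w > fake @ w)
```

Comparing the two scores per pair is what distinguishes this ranking setup from classifying each video in isolation, which is the contrast the abstract draws.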

Cite

Text

Saxen et al. "Real vs. Fake Emotion Challenge: Learning to Rank Authenticity from Facial Activity Descriptors." IEEE/CVF International Conference on Computer Vision Workshops, 2017. doi:10.1109/ICCVW.2017.363

Markdown

[Saxen et al. "Real vs. Fake Emotion Challenge: Learning to Rank Authenticity from Facial Activity Descriptors." IEEE/CVF International Conference on Computer Vision Workshops, 2017.](https://mlanthology.org/iccvw/2017/saxen2017iccvw-real/) doi:10.1109/ICCVW.2017.363

BibTeX

@inproceedings{saxen2017iccvw-real,
  title     = {{Real vs. Fake Emotion Challenge: Learning to Rank Authenticity from Facial Activity Descriptors}},
  author    = {Saxen, Frerk and Werner, Philipp and Al-Hamadi, Ayoub},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2017},
  pages     = {3073--3078},
  doi       = {10.1109/ICCVW.2017.363},
  url       = {https://mlanthology.org/iccvw/2017/saxen2017iccvw-real/}
}