Facial Action Unit Recognition in the Wild with Multi-Task CNN Self-Training for the EmotioNet Challenge
Abstract
Automatic understanding of facial behavior is hampered by factors such as occlusion, illumination, non-frontal head pose, low image resolution, and limited labeled training data. The EmotioNet 2020 Challenge addresses these issues through a competition on recognizing facial action units on in-the-wild data. We propose to combine multi-task learning and self-training to make the best use of the small manually (fully) labeled and the large weakly (partially) labeled training datasets provided by the challenge organizers. With our approach, and without using additional data, we achieve second place in the 2020 challenge, with a performance gap of only 0.05% to the challenge winner and 5.9% to the third place. On the 2018 challenge evaluation data, our method outperforms all other known results.
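The abstract describes the core idea: a multi-task CNN with one binary output per action unit (AU), trained by self-training, where confident predictions on the weakly labeled data become pseudo-labels for further training. The following is a minimal, hypothetical PyTorch sketch of that idea; the backbone choice (ResNet-18), the number of AUs, and the confidence threshold are illustrative assumptions, not the authors' exact configuration.

# Hypothetical sketch of multi-task AU recognition with self-training.
# Backbone, AU count, and threshold are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models

NUM_AUS = 12          # assumption: one binary head per action unit
CONF_THRESHOLD = 0.9  # assumption: pseudo-label only confident predictions

class MultiTaskAUNet(nn.Module):
    """Shared CNN backbone with one sigmoid logit per action unit."""
    def __init__(self, num_aus: int = NUM_AUS):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()           # strip the ImageNet classifier
        self.backbone = backbone
        self.heads = nn.Linear(512, num_aus)  # multi-task: one logit per AU

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.heads(self.backbone(x))   # raw logits, one per AU

def pseudo_label(model: nn.Module, images: torch.Tensor):
    """Self-training step: predict on weakly labeled images and keep only
    confident per-AU predictions; uncertain entries are masked out."""
    model.eval()
    with torch.no_grad():
        probs = torch.sigmoid(model(images))
    confident = (probs > CONF_THRESHOLD) | (probs < 1 - CONF_THRESHOLD)
    labels = (probs > 0.5).float()
    return labels, confident.float()          # mask excludes unsure AUs

def masked_bce(logits, labels, mask):
    """Binary cross-entropy that ignores masked (unknown/unconfident) AUs."""
    loss = nn.functional.binary_cross_entropy_with_logits(
        logits, labels, reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)

A training loop built on this sketch would alternate between supervised batches from the fully labeled set (mask of all ones) and pseudo-labeled batches from the weakly labeled set, with masked_bce ensuring that unconfident or unannotated AUs contribute no gradient.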
Cite
Text
Werner et al. "Facial Action Unit Recognition in the Wild with Multi-Task CNN Self-Training for the EmotioNet Challenge." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020. doi:10.1109/CVPRW50498.2020.00213
Markdown
[Werner et al. "Facial Action Unit Recognition in the Wild with Multi-Task CNN Self-Training for the EmotioNet Challenge." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020.](https://mlanthology.org/cvprw/2020/werner2020cvprw-facial/) doi:10.1109/CVPRW50498.2020.00213
BibTeX
@inproceedings{werner2020cvprw-facial,
title = {{Facial Action Unit Recognition in the Wild with Multi-Task CNN Self-Training for the EmotioNet Challenge}},
author = {Werner, Philipp and Saxen, Frerk and Al-Hamadi, Ayoub},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2020},
pages = {1649--1652},
doi = {10.1109/CVPRW50498.2020.00213},
url = {https://mlanthology.org/cvprw/2020/werner2020cvprw-facial/}
}