Extended DISFA Dataset: Investigating Posed and Spontaneous Facial Expressions
Abstract
Automatic facial expression recognition (FER) is an important component of affect-aware technologies. Because of the lack of labeled spontaneous data, the majority of existing automated FER systems were trained on posed facial expressions; in real-world applications, however, we deal with (subtle) spontaneous facial expressions. This paper introduces an extension of DISFA, a previously released and well-accepted face dataset. Extended DISFA (DISFA+) has the following features: 1) it contains a large set of posed and spontaneous facial expression data for the same group of individuals; 2) it provides manually labeled, frame-based annotations of 5-level intensity for twelve FACS facial actions; 3) it provides metadata (i.e., facial landmark points as well as each individual's self-report regarding every posed facial expression). This paper introduces and employs DISFA+ to analyze and compare the temporal patterns and dynamic characteristics of posed and spontaneous facial expressions.
Cite
Text
Mavadati et al. "Extended DISFA Dataset: Investigating Posed and Spontaneous Facial Expressions." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2016. doi:10.1109/CVPRW.2016.182
Markdown
[Mavadati et al. "Extended DISFA Dataset: Investigating Posed and Spontaneous Facial Expressions." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2016.](https://mlanthology.org/cvprw/2016/mavadati2016cvprw-extended/) doi:10.1109/CVPRW.2016.182
BibTeX
@inproceedings{mavadati2016cvprw-extended,
title = {{Extended DISFA Dataset: Investigating Posed and Spontaneous Facial Expressions}},
author = {Mavadati, Seyed Mohammad and Sanger, Peyten and Mahoor, Mohammad H.},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2016},
  pages = {1452--1459},
doi = {10.1109/CVPRW.2016.182},
url = {https://mlanthology.org/cvprw/2016/mavadati2016cvprw-extended/}
}