Discovery of Facial Motions Using Deep Machine Perception
Abstract
Deep, intuitive understanding of facial motions has the potential to enable an intelligent facial expression system as well as a unique encoding of the dynamics of facial actions. The most promising existing approaches rely on extracting hand-crafted features, typically work best in constrained conditions, and do not generalize well to varying environmental conditions, which makes them poorly suited to applications such as real-time human-robot interaction. In this paper, we propose a multi-label deep-learning-based facial action detector which, combined with a linear SVM classifier, outperforms state-of-the-art approaches such as HOG and LBP. We show that our approach generalizes to other datasets by learning the inner structure of the data, encoding facial actions, and providing a hierarchical representation of facial features. Our experimental results also demonstrate the efficiency of using image patches, which yields faster learning convergence while outperforming holistic approaches. We evaluate our proposed framework on the DISFA and CK+ datasets.
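The detection pipeline described above pairs learned deep features with a per-label linear SVM. A minimal sketch of that multi-label SVM stage, assuming features have already been extracted (here replaced by random placeholder vectors, with hypothetical dimensions and five stand-in action-unit labels, not the paper's actual configuration):

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

# Placeholder "deep" feature vectors for 100 face images; in the paper these
# would come from the learned hierarchical representation of facial patches.
rng = np.random.RandomState(0)
X = rng.randn(100, 64)                     # 64-d feature per image (assumed)
Y = (rng.rand(100, 5) > 0.7).astype(int)   # 5 binary action-unit labels per image

# One linear SVM per action unit, mirroring a multi-label detection setup.
clf = OneVsRestClassifier(LinearSVC(C=1.0, max_iter=5000))
clf.fit(X, Y)

pred = clf.predict(X[:3])
print(pred.shape)  # one binary decision per action unit for each test image
```

Each column of `Y` is treated as an independent binary detection problem, which is the standard way a linear SVM is applied on top of learned features for multi-label action-unit detection.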
Cite
Text
Ghasemi et al. "Discovery of Facial Motions Using Deep Machine Perception." IEEE/CVF Winter Conference on Applications of Computer Vision, 2016. doi:10.1109/WACV.2016.7477448

Markdown
[Ghasemi et al. "Discovery of Facial Motions Using Deep Machine Perception." IEEE/CVF Winter Conference on Applications of Computer Vision, 2016.](https://mlanthology.org/wacv/2016/ghasemi2016wacv-discovery/) doi:10.1109/WACV.2016.7477448

BibTeX
@inproceedings{ghasemi2016wacv-discovery,
title = {{Discovery of Facial Motions Using Deep Machine Perception}},
author = {Ghasemi, Afsaneh and Denman, Simon and Sridharan, Sridha and Fookes, Clinton},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
year = {2016},
pages = {1-7},
doi = {10.1109/WACV.2016.7477448},
url = {https://mlanthology.org/wacv/2016/ghasemi2016wacv-discovery/}
}