Joint Patch and Multi-Label Learning for Facial Action Unit Detection
Abstract
The face is one of the most powerful channels of non-verbal communication. The most commonly used taxonomy to describe facial behaviour is the Facial Action Coding System (FACS). FACS segments the visible effects of facial muscle activation into 30+ action units (AUs). AUs, which may occur alone and in thousands of combinations, can describe nearly all possible facial expressions. Most existing methods for automatic AU detection treat the problem using one-vs-all classifiers and fail to exploit dependencies among AUs and facial features. We introduce joint patch and multi-label learning (JPML) to address these issues. JPML leverages group sparsity by selecting a sparse subset of facial patches while learning a multi-label classifier. In four of five comparisons on three diverse datasets, CK+, GFT, and BP4D, JPML produced the highest average F1 scores in comparison with the state of the art.
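As a rough illustration of what a joint patch-selection and multi-label objective of this kind might look like, one could combine a per-AU loss with a group-lasso penalty over patch-indexed feature groups and a term coupling the AU classifiers. The symbols below (weight matrix W, patch groups G, relational regularizer Ω, and trade-off parameters λ₁, λ₂) are illustrative placeholders under that assumption, not the paper's actual formulation:

\[
\min_{\mathbf{W}} \ \sum_{l=1}^{L} \mathcal{L}\!\left(\mathbf{y}_l,\ \mathbf{X}\mathbf{w}_l\right)
\;+\; \lambda_1 \sum_{g \in \mathcal{G}} \left\| \mathbf{W}_g \right\|_2
\;+\; \lambda_2\, \Omega(\mathbf{W})
\]

Here the group-lasso term drives entire patch groups of weights to zero (patch selection), while Ω(W) ties the per-AU classifiers w_1, …, w_L together (multi-label learning).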
Cite
Text
Zhao et al. "Joint Patch and Multi-Label Learning for Facial Action Unit Detection." Conference on Computer Vision and Pattern Recognition, 2015. doi:10.1109/CVPR.2015.7298833Markdown
[Zhao et al. "Joint Patch and Multi-Label Learning for Facial Action Unit Detection." Conference on Computer Vision and Pattern Recognition, 2015.](https://mlanthology.org/cvpr/2015/zhao2015cvpr-joint/) doi:10.1109/CVPR.2015.7298833BibTeX
@inproceedings{zhao2015cvpr-joint,
title = {{Joint Patch and Multi-Label Learning for Facial Action Unit Detection}},
author = {Zhao, Kaili and Chu, Wen-Sheng and De la Torre, Fernando and Cohn, Jeffrey F. and Zhang, Honggang},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2015},
doi = {10.1109/CVPR.2015.7298833},
url = {https://mlanthology.org/cvpr/2015/zhao2015cvpr-joint/}
}