Automated Action Units vs. Expert Raters: Face Off
Abstract
User engagement is an essential component of any application design. Finding reliable methods to forecast continuous engagement can aid in creating adaptive applications such as web-based interventions, intelligent student tutoring, and socially intelligent robots. In this paper, we compare observational estimates from expert raters to vision-based learning for estimating user engagement. The vision-based approach uses automated computation of Action Units combined with an RNN. Several data collection techniques have been explored in the past that capture different modalities of engagement, from obtaining self-reports to gathering external observations via crowd-sourcing or even trained expert raters. Traditional machine learning approaches discard annotations from inconsistent raters, use rater averages, or apply rater-specific weighting schemes. Such approaches often end up throwing away expensive annotations. We introduce a novel approach that exploits the inherent confusion and disagreement in raters' annotations to build a scalable engagement estimation model that learns to appropriately weigh subjective behavioral cues. We show that actively modeling the uncertainty, either explicitly from expert raters or from automated estimation with AUs, significantly improves prediction over using only the average engagement ratings. Our approach performs significantly better than or on par with experts in predicting engagement for a trauma-recovery application.
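The abstract's central idea, training against the raters' full distribution of annotations rather than their average, can be illustrated with a minimal sketch. The 4-level engagement scale, the function names, and the example predictions below are assumptions for illustration, not details from the paper:

```python
import numpy as np

def soft_label(ratings, n_levels=4):
    """Turn multiple raters' ordinal annotations (levels 0..n_levels-1)
    into a target distribution that preserves their disagreement."""
    counts = np.bincount(ratings, minlength=n_levels)
    return counts / counts.sum()

def cross_entropy(pred, target, eps=1e-12):
    """Cross-entropy of a predicted distribution against the soft target."""
    return -np.sum(target * np.log(pred + eps))

# Three raters disagree about one frame's engagement level.
ratings = np.array([1, 2, 2])
target = soft_label(ratings)  # [0, 1/3, 2/3, 0] -- disagreement is kept

# A model that spreads mass the way the raters did scores better than
# one that is confident only in the (rounded) average rating of 2.
pred_spread = np.array([0.05, 0.30, 0.60, 0.05])
pred_avg    = np.array([0.05, 0.05, 0.85, 0.05])
print(cross_entropy(pred_spread, target) < cross_entropy(pred_avg, target))
```

In a full pipeline, such soft targets would supervise the RNN's per-timestep output over AU features; averaging the ratings first would collapse the target to a single level and discard the uncertainty signal.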
Cite
Text
Dhamija and Boult. "Automated Action Units vs. Expert Raters: Face Off." IEEE/CVF Winter Conference on Applications of Computer Vision, 2018. doi:10.1109/WACV.2018.00035

Markdown
[Dhamija and Boult. "Automated Action Units vs. Expert Raters: Face Off." IEEE/CVF Winter Conference on Applications of Computer Vision, 2018.](https://mlanthology.org/wacv/2018/dhamija2018wacv-automated/) doi:10.1109/WACV.2018.00035

BibTeX
@inproceedings{dhamija2018wacv-automated,
title = {{Automated Action Units vs. Expert Raters: Face Off}},
author = {Dhamija, Svati and Boult, Terrance E.},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision},
year = {2018},
  pages = {259--268},
doi = {10.1109/WACV.2018.00035},
url = {https://mlanthology.org/wacv/2018/dhamija2018wacv-automated/}
}