Group Affect Prediction Using Multimodal Distributions
Abstract
We describe our approach to building an efficient predictive model for detecting the emotion of a group of people in an image. We propose that training a Convolutional Neural Network (CNN) model on emotion heatmaps extracted from the image outperforms a CNN model trained entirely on the raw images. The comparison of the models has been done on the recently published dataset of the Emotion Recognition in the Wild (EmotiW) challenge, 2017. The proposed method achieved a validation accuracy of 55.23%, which is 2.44% above the baseline accuracy provided by the EmotiW organizers.
Cite
Text
Shamsi et al. "Group Affect Prediction Using Multimodal Distributions." IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, 2018. doi:10.1109/WACVW.2018.00015
Markdown
[Shamsi et al. "Group Affect Prediction Using Multimodal Distributions." IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, 2018.](https://mlanthology.org/wacvw/2018/shamsi2018wacvw-group/) doi:10.1109/WACVW.2018.00015
BibTeX
@inproceedings{shamsi2018wacvw-group,
title = {{Group Affect Prediction Using Multimodal Distributions}},
author = {Shamsi, Saqib Nizam and Singh, Bhanu Pratap and Wadhwa, Manya},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision Workshops},
year = {2018},
  pages = {77--83},
doi = {10.1109/WACVW.2018.00015},
url = {https://mlanthology.org/wacvw/2018/shamsi2018wacvw-group/}
}