Efficient Feature Extraction and Late Fusion Strategy for Audiovisual Emotional Mimicry Intensity Estimation

Abstract

In this paper, we present our solution to the Emotional Mimicry Intensity (EMI) Estimation challenge, which is part of the 6th Affective Behavior Analysis in-the-wild (ABAW) 2024 competition. The EMI Estimation task aims to evaluate the emotional intensity of seed videos across a set of predefined emotion categories (i.e., "Admiration", "Amusement", "Determination", "Empathic Pain", "Excitement" and "Joy"). To tackle this challenge, we extracted rich dual-channel visual features based on ResNet18 and facial Action Units (AUs) for the video modality, and effective single-channel features based on Wav2Vec2.0 for the audio modality. This allowed us to obtain comprehensive emotional features across the audiovisual modalities. Additionally, leveraging a late fusion strategy, we averaged the predictions of the visual and acoustic models, resulting in a more accurate estimation of audiovisual emotional mimicry intensity. Experimental results confirmed the effectiveness of our approach, with an average Pearson's Correlation Coefficient (ρ) of 0.3288 over the 6 emotional dimensions on the validation set, and 0.3594 on the test set. We ultimately achieved third place in the competition.
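The late fusion and evaluation described in the abstract are straightforward to sketch: average the per-sample, per-dimension predictions of the visual and acoustic models, then score each of the six emotion dimensions with Pearson's ρ and report the mean. The sketch below is a minimal illustration under assumed array shapes (samples × 6 dimensions); the function names and the uniform 50/50 averaging weight are assumptions, not the authors' exact code.

```python
import numpy as np

# The six emotion dimensions named in the abstract.
EMOTIONS = ["Admiration", "Amusement", "Determination",
            "Empathic Pain", "Excitement", "Joy"]

def late_fusion(visual_preds: np.ndarray, audio_preds: np.ndarray) -> np.ndarray:
    """Late fusion: element-wise average of the two unimodal models' outputs.

    Both inputs are assumed to have shape (n_samples, 6).
    """
    return (visual_preds + audio_preds) / 2.0

def mean_pearson(preds: np.ndarray, labels: np.ndarray) -> float:
    """Mean Pearson correlation coefficient (ρ) over the 6 emotion dimensions."""
    rhos = [np.corrcoef(preds[:, d], labels[:, d])[0, 1]
            for d in range(preds.shape[1])]
    return float(np.mean(rhos))
```

For example, fusing visual predictions `[0.2, ...]` with audio predictions `[0.4, ...]` yields `[0.3, ...]` per dimension; `mean_pearson` then mirrors the challenge metric averaged across the six dimensions.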

Cite

Text

Yu et al. "Efficient Feature Extraction and Late Fusion Strategy for Audiovisual Emotional Mimicry Intensity Estimation." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024. doi:10.1109/CVPRW63382.2024.00490

Markdown

[Yu et al. "Efficient Feature Extraction and Late Fusion Strategy for Audiovisual Emotional Mimicry Intensity Estimation." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024.](https://mlanthology.org/cvprw/2024/yu2024cvprw-efficient/) doi:10.1109/CVPRW63382.2024.00490

BibTeX

@inproceedings{yu2024cvprw-efficient,
  title     = {{Efficient Feature Extraction and Late Fusion Strategy for Audiovisual Emotional Mimicry Intensity Estimation}},
  author    = {Yu, Jun and Zhu, Wangyuan and Zhu, Jichao and Cai, Zhongpeng and Zhao, Gongpeng and Zhang, Zerui and Xie, Guochen and Wei, Zhihong and Liu, Qingsong and Liang, Jiaen},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2024},
  pages     = {4866-4872},
  doi       = {10.1109/CVPRW63382.2024.00490},
  url       = {https://mlanthology.org/cvprw/2024/yu2024cvprw-efficient/}
}