Advanced Facial Analysis in Multi-Modal Data with Cascaded Cross-Attention Based Transformer

Abstract

Facial expressions are among the most important cues for understanding humans at a psychological level, and indicators such as expression (EXPR), valence-arousal (VA), and action units (AU) are essential for analyzing human behavior. In this paper, we present our method for the Challenge of the 6th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW) at CVPR 2024. Our method operates on the multi-modal Aff-Wild2 dataset, which we split into visual and audio modalities. For the visual data, we extract features with a SimMIM model pre-trained on a diverse set of facial expression data; for the audio data, we extract features with the Wav2Vec model. To fuse the extracted visual and audio features, we propose a cascaded cross-attention mechanism within a transformer. Our approach achieved average F1 scores of 0.4652 and 0.3005 on the AU and EXPR tracks, respectively, and an average Concordance Correlation Coefficient (CCC) of 0.5077 on the VA track, outperforming the baseline on all three tracks of the ABAW6 competition. Our approach placed 5th, 6th, and 7th on the AU, EXPR, and VA tracks, respectively. The code used in the 6th ABAW competition is available at https://github.com/namho-96/ABAW2024.
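To make the fusion step concrete, below is a minimal, illustrative PyTorch sketch of cascaded cross-attention between a visual token sequence (e.g., SimMIM features) and an audio token sequence (e.g., Wav2Vec features). The module names, embedding size, depth, pooling, and residual/normalization layout are assumptions for exposition, not the authors' exact architecture; see the linked repository for the actual implementation.

import torch
import torch.nn as nn

class CascadedCrossAttentionBlock(nn.Module):
    """One fusion stage: each modality attends to the other in sequence.

    NOTE: this is a hypothetical layout, not the paper's exact block.
    """
    def __init__(self, dim=512, num_heads=8):
        super().__init__()
        self.vis_attends_aud = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.aud_attends_vis = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_vis = nn.LayerNorm(dim)
        self.norm_aud = nn.LayerNorm(dim)

    def forward(self, vis, aud):
        # Visual tokens query the audio tokens (cross-attention), with a
        # residual connection and layer norm.
        vis = self.norm_vis(vis + self.vis_attends_aud(vis, aud, aud)[0])
        # Audio tokens then query the already-updated visual tokens,
        # which is what makes the two attention passes "cascaded".
        aud = self.norm_aud(aud + self.aud_attends_vis(aud, vis, vis)[0])
        return vis, aud

class CascadedCrossAttentionFusion(nn.Module):
    """Stacks several cross-attention stages and pools a fused feature."""
    def __init__(self, dim=512, num_heads=8, depth=2):
        super().__init__()
        self.blocks = nn.ModuleList(
            CascadedCrossAttentionBlock(dim, num_heads) for _ in range(depth))

    def forward(self, vis, aud):
        for blk in self.blocks:  # cascade the fusion stages
            vis, aud = blk(vis, aud)
        # Mean-pool each modality over its token dimension and concatenate,
        # yielding one multi-modal vector per clip for the task heads.
        return torch.cat([vis.mean(dim=1), aud.mean(dim=1)], dim=-1)

# Example usage: fuse 32 visual tokens with 64 audio tokens per clip.
fusion = CascadedCrossAttentionFusion(dim=512)
vis = torch.randn(2, 32, 512)   # (batch, visual tokens, dim)
aud = torch.randn(2, 64, 512)   # (batch, audio tokens, dim)
out = fusion(vis, aud)          # -> shape (2, 1024)

Because the two modalities only interact through attention, the visual and audio sequences may have different lengths, which suits frame-level video features paired with denser audio features.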

Cite

Text

Kim et al. "Advanced Facial Analysis in Multi-Modal Data with Cascaded Cross-Attention Based Transformer." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024. doi:10.1109/CVPRW63382.2024.00784

Markdown

[Kim et al. "Advanced Facial Analysis in Multi-Modal Data with Cascaded Cross-Attention Based Transformer." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024.](https://mlanthology.org/cvprw/2024/kim2024cvprw-advanced/) doi:10.1109/CVPRW63382.2024.00784

BibTeX

@inproceedings{kim2024cvprw-advanced,
  title     = {{Advanced Facial Analysis in Multi-Modal Data with Cascaded Cross-Attention Based Transformer}},
  author    = {Kim, Jun-Hwa and Kim, Namho and Hong, Minsoo and Won, Chee Sun},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2024},
  pages     = {7870--7877},
  doi       = {10.1109/CVPRW63382.2024.00784},
  url       = {https://mlanthology.org/cvprw/2024/kim2024cvprw-advanced/}
}