Leveraging TCN and Transformer for Effective Visual-Audio Fusion in Continuous Emotion Recognition

Abstract

Human emotion recognition plays an important role in human-computer interaction. In this paper, we present our approach to the Valence-Arousal (VA) Estimation Challenge, Expression (Expr) Classification Challenge, and Action Unit (AU) Detection Challenge of the 5th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW). Specifically, we propose a novel multi-modal fusion model that leverages Temporal Convolutional Networks (TCN) and Transformer to enhance the performance of continuous emotion recognition. Our model effectively integrates visual and audio information for improved accuracy in recognizing emotions. It outperforms the baseline and ranks 3rd in the Expression Classification challenge.
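The abstract describes the architecture only at a high level: per-modality temporal modeling with TCNs, followed by a Transformer that fuses the visual and audio streams. The following is a minimal sketch of that general pattern, not the authors' implementation; all layer sizes, feature dimensions, and the exact fusion strategy (concatenation along the channel axis before a Transformer encoder) are assumptions.

```python
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    """One dilated causal 1-D conv block, the building unit of a TCN (assumed design)."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        pad = (kernel_size - 1) * dilation  # enough left context to keep the conv causal
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=pad, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):                   # x: (batch, channels, time)
        out = self.relu(self.conv(x))
        return out[..., :x.size(-1)] + x    # trim right padding, residual connection

class AVFusionModel(nn.Module):
    """Sketch: per-modality TCNs -> channel concat -> Transformer encoder -> head.
    Feature dimensions are illustrative placeholders, not the paper's values."""
    def __init__(self, vis_dim=512, aud_dim=128, d_model=128, n_out=2):
        super().__init__()
        self.vis_proj = nn.Conv1d(vis_dim, d_model, kernel_size=1)
        self.aud_proj = nn.Conv1d(aud_dim, d_model, kernel_size=1)
        self.vis_tcn = nn.Sequential(TemporalBlock(d_model, dilation=1),
                                     TemporalBlock(d_model, dilation=2))
        self.aud_tcn = nn.Sequential(TemporalBlock(d_model, dilation=1),
                                     TemporalBlock(d_model, dilation=2))
        layer = nn.TransformerEncoderLayer(d_model=2 * d_model, nhead=4,
                                           batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(2 * d_model, n_out)   # e.g. per-frame valence & arousal

    def forward(self, vis, aud):            # vis: (B, T, vis_dim), aud: (B, T, aud_dim)
        v = self.vis_tcn(self.vis_proj(vis.transpose(1, 2)))
        a = self.aud_tcn(self.aud_proj(aud.transpose(1, 2)))
        fused = torch.cat([v, a], dim=1).transpose(1, 2)   # (B, T, 2 * d_model)
        return self.head(self.fusion(fused))               # per-frame predictions

model = AVFusionModel()
out = model(torch.randn(2, 50, 512), torch.randn(2, 50, 128))
print(out.shape)  # torch.Size([2, 50, 2])
```

The TCNs capture local temporal dynamics within each modality, while the Transformer's self-attention lets every fused frame attend to the full sequence, which is one plausible reading of how the two components complement each other for continuous (frame-level) prediction.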

Cite

Text

Zhou et al. "Leveraging TCN and Transformer for Effective Visual-Audio Fusion in Continuous Emotion Recognition." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023. doi:10.1109/CVPRW59228.2023.00610

Markdown

[Zhou et al. "Leveraging TCN and Transformer for Effective Visual-Audio Fusion in Continuous Emotion Recognition." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023.](https://mlanthology.org/cvprw/2023/zhou2023cvprw-leveraging/) doi:10.1109/CVPRW59228.2023.00610

BibTeX

@inproceedings{zhou2023cvprw-leveraging,
  title     = {{Leveraging TCN and Transformer for Effective Visual-Audio Fusion in Continuous Emotion Recognition}},
  author    = {Zhou, Weiwei and Lu, Jiada and Xiong, Zhaolong and Wang, Weifeng},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2023},
  pages     = {5756--5763},
  doi       = {10.1109/CVPRW59228.2023.00610},
  url       = {https://mlanthology.org/cvprw/2023/zhou2023cvprw-leveraging/}
}