FSER: Deep Convolutional Neural Networks for Speech Emotion Recognition

Abstract

Using mel-spectrograms instead of conventional MFCC features, we assess the ability of convolutional neural networks to accurately recognize and classify emotions from speech data. We introduce FSER, a speech emotion recognition model trained on four validated speech databases, achieving a high classification accuracy of 95.05% across eight emotion classes: anger, anxiety, calm, disgust, happiness, neutral, sadness, and surprise. On each benchmark dataset, FSER outperforms the best models introduced so far, achieving state-of-the-art performance. We show that FSER remains reliable independently of language, sex identity, and other external factors. Additionally, we describe how FSER could potentially be used to improve mental and emotional health care, and how our analysis and findings serve as guidelines and benchmarks for further work in this direction.
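The core idea of the abstract is to treat speech as an image: the waveform is converted to a log-mel-spectrogram, which a CNN can then classify like any 2-D input. As a minimal, self-contained sketch of that front end (not the paper's actual pipeline; the filterbank size, FFT length, and hop are illustrative assumptions, and libraries such as librosa provide production-grade equivalents):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels=40):
    # Triangular filters spaced evenly on the mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def log_mel_spectrogram(y, sr, n_fft=512, hop=256, n_mels=40):
    # Frame the signal, window each frame, take the power spectrum,
    # then project onto the mel filterbank and compress with a log.
    n_frames = 1 + (len(y) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([y[t * hop:t * hop + n_fft] * window
                       for t in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    mel = power @ mel_filterbank(sr, n_fft, n_mels).T
    return np.log(mel + 1e-10).T  # shape (n_mels, n_frames), a 2-D "image"

# Example: one second of a 440 Hz tone sampled at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
spec = log_mel_spectrogram(np.sin(2 * np.pi * 440.0 * t), sr)
print(spec.shape)  # (40, 61)
```

The resulting `(n_mels, n_frames)` array is what a CNN would consume, typically after per-utterance normalization and batching as a single-channel image.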

Cite

Text

Dossou and Gbenou. "FSER: Deep Convolutional Neural Networks for Speech Emotion Recognition." IEEE/CVF International Conference on Computer Vision Workshops, 2021. doi:10.1109/ICCVW54120.2021.00393

Markdown

[Dossou and Gbenou. "FSER: Deep Convolutional Neural Networks for Speech Emotion Recognition." IEEE/CVF International Conference on Computer Vision Workshops, 2021.](https://mlanthology.org/iccvw/2021/dossou2021iccvw-fser/) doi:10.1109/ICCVW54120.2021.00393

BibTeX

@inproceedings{dossou2021iccvw-fser,
  title     = {{FSER: Deep Convolutional Neural Networks for Speech Emotion Recognition}},
  author    = {Dossou, Bonaventure F. P. and Gbenou, Yeno K. S.},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2021},
  pages     = {3526--3531},
  doi       = {10.1109/ICCVW54120.2021.00393},
  url       = {https://mlanthology.org/iccvw/2021/dossou2021iccvw-fser/}
}