Automatic Multimodal Emotion Recognition Using Facial Expression, Voice, and Text

Abstract

Humans have long dreamed of interacting with a machine as we would with a person: one that understands us, advises us, and looks after us without human supervision. Despite being efficient at logical reasoning, current advanced systems lack empathy and user understanding. Estimating the user's emotion could greatly help the machine identify the user's needs and adapt its behaviour accordingly. This research project aims to develop an automatic emotion recognition system based on facial expression, voice, and words. We expect to address the challenges related to multimodality, data complexity, and emotion representation.

Cite

Text

Hélène Tran. "Automatic Multimodal Emotion Recognition Using Facial Expression, Voice, and Text." International Joint Conference on Artificial Intelligence, 2022, pp. 5881-5882. doi:10.24963/IJCAI.2022/843

Markdown

[Hélène Tran. "Automatic Multimodal Emotion Recognition Using Facial Expression, Voice, and Text." International Joint Conference on Artificial Intelligence, 2022, pp. 5881-5882.](https://mlanthology.org/ijcai/2022/tran2022ijcai-automatic/) doi:10.24963/IJCAI.2022/843

BibTeX

@inproceedings{tran2022ijcai-automatic,
  title     = {{Automatic Multimodal Emotion Recognition Using Facial Expression, Voice, and Text}},
  author    = {Tran, Hélène},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {5881--5882},
  doi       = {10.24963/IJCAI.2022/843},
  url       = {https://mlanthology.org/ijcai/2022/tran2022ijcai-automatic/}
}