BERT-ERC: Fine-Tuning BERT Is Enough for Emotion Recognition in Conversation
Abstract
Previous work on emotion recognition in conversation (ERC) follows a two-step paradigm: first producing context-independent features by fine-tuning pretrained language models (PLMs), and then analyzing contextual information and dialogue structure information among the extracted features. However, we discover that this paradigm has several limitations. Accordingly, we propose a novel paradigm, i.e., exploring contextual information and dialogue structure information in the fine-tuning step, and adapting the PLM to the ERC task in terms of input text, classification structure, and training strategy. Following the proposed paradigm, we develop our model BERT-ERC, which improves ERC performance in three aspects: suggestive text, a fine-grained classification module, and two-stage training. Compared to existing methods, BERT-ERC achieves substantial improvements on four datasets, indicating its effectiveness and generalization capability. In addition, we set up a limited-resources scenario and an online-prediction scenario to approximate real-world conditions. Extensive experiments demonstrate that the proposed paradigm significantly outperforms the previous one and can be adapted to various scenarios.
Cite
Text
Qin et al. "BERT-ERC: Fine-Tuning BERT Is Enough for Emotion Recognition in Conversation." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I11.26582
Markdown
[Qin et al. "BERT-ERC: Fine-Tuning BERT Is Enough for Emotion Recognition in Conversation." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/qin2023aaai-bert/) doi:10.1609/AAAI.V37I11.26582
BibTeX
@inproceedings{qin2023aaai-bert,
title = {{BERT-ERC: Fine-Tuning BERT Is Enough for Emotion Recognition in Conversation}},
author = {Qin, Xiangyu and Wu, Zhiyu and Zhang, Tingting and Li, Yanran and Luan, Jian and Wang, Bin and Wang, Li and Cui, Jinshi},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
pages = {13492--13500},
doi = {10.1609/AAAI.V37I11.26582},
url = {https://mlanthology.org/aaai/2023/qin2023aaai-bert/}
}