Emergence of Hierarchical Emotion Representations in Large Language Models
Abstract
As large language models (LLMs) increasingly power conversational agents, understanding how they represent, predict, and influence human emotions is crucial for ethical deployment. By analyzing probabilistic dependencies between emotional states in model outputs, we uncover hierarchical structures in LLMs' emotion representations. Our findings show that larger models, such as LLaMA 3.1 (405B parameters), develop more complex hierarchies. We also find that better emotional modeling enhances persuasive abilities in synthetic negotiation tasks, with LLMs that more accurately predict counterparts' emotions achieving superior outcomes. Additionally, we explore how persona biases, such as gender and socioeconomic status, affect emotion recognition, revealing frequent misclassifications of minority personas. This study contributes to both the scientific understanding and ethical considerations of emotion modeling in LLMs.
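The abstract mentions uncovering hierarchy by analyzing probabilistic dependencies between emotional states in model outputs. A minimal sketch of one way such an analysis could look, using made-up co-occurrence data and a hypothetical asymmetry criterion (this is an illustration, not the paper's actual method):

```python
# Hypothetical per-response emotion annotations (toy data, not the paper's):
# each response is labeled with the set of emotions the model expressed.
responses = [
    {"sadness", "grief"},
    {"sadness", "disappointment"},
    {"sadness", "grief"},
    {"sadness"},
    {"joy", "pride"},
    {"joy"},
    {"joy", "pride"},
]

def conditional(a, b, data):
    """Estimate P(a present | b present) from co-occurrence counts."""
    with_b = [r for r in data if b in r]
    if not with_b:
        return 0.0
    return sum(1 for r in with_b if a in r) / len(with_b)

def parent_of(parent, child, data, hi=0.9, lo=0.8):
    """Treat `parent` as an ancestor of `child` when the dependency is
    strongly asymmetric: the finer emotion nearly always implies the
    broader one, but not the other way around. The thresholds here are
    arbitrary choices for illustration."""
    return (conditional(parent, child, data) >= hi
            and conditional(child, parent, data) < lo)

# "grief" implies "sadness" in this toy data, but not vice versa,
# so "sadness" sits above "grief" in the inferred hierarchy.
print(parent_of("sadness", "grief", responses))  # True
print(parent_of("grief", "sadness", responses))  # False
```

Applied pairwise over many emotion labels, such asymmetric dependencies induce a directed graph whose transitive structure can be read as a hierarchy; the paper's finding is that this structure grows richer with model scale.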
Cite
Text
Zhao et al. "Emergence of Hierarchical Emotion Representations in Large Language Models." NeurIPS 2024 Workshops: SciForDL, 2024.
Markdown
[Zhao et al. "Emergence of Hierarchical Emotion Representations in Large Language Models." NeurIPS 2024 Workshops: SciForDL, 2024.](https://mlanthology.org/neuripsw/2024/zhao2024neuripsw-emergence/)
BibTeX
@inproceedings{zhao2024neuripsw-emergence,
title = {{Emergence of Hierarchical Emotion Representations in Large Language Models}},
author = {Zhao, Bo and Okawa, Maya and Bigelow, Eric J and Yu, Rose and Ullman, Tomer and Tanaka, Hidenori},
booktitle = {NeurIPS 2024 Workshops: SciForDL},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/zhao2024neuripsw-emergence/}
}