Can Large Language Model Agents Simulate Human Trust Behavior?
Abstract
Large Language Model (LLM) agents have been increasingly adopted as simulation tools to model humans in social science and role-playing applications. However, one fundamental question remains: can LLM agents really simulate human behavior? In this paper, we focus on one critical and elemental behavior in human interactions, trust, and investigate whether LLM agents can simulate human trust behavior. We first find that LLM agents generally exhibit trust behavior, referred to as agent trust, under the framework of Trust Games, which are widely recognized in behavioral economics. Then, we discover that GPT-4 agents manifest high behavioral alignment with humans in terms of trust behavior, indicating the feasibility of simulating human trust behavior with LLM agents. In addition, we probe the biases of agent trust and differences in agent trust towards other LLM agents and humans. We also explore the intrinsic properties of agent trust under conditions including external manipulations and advanced reasoning strategies. Our study provides new insights into the behaviors of LLM agents and the fundamental analogy between LLMs and humans beyond value alignment. We further illustrate broader implications of our discoveries for applications where trust is paramount.
Cite
Text
Jia et al. "Can Large Language Model Agents Simulate Human Trust Behavior?" Neural Information Processing Systems, 2024. doi:10.52202/079017-0501
Markdown
[Jia et al. "Can Large Language Model Agents Simulate Human Trust Behavior?" Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/jia2024neurips-large/) doi:10.52202/079017-0501
BibTeX
@inproceedings{jia2024neurips-large,
title = {{Can Large Language Model Agents Simulate Human Trust Behavior?}},
author = {Jia, Feiran and Ye, Ziyu and Lai, Shiyang and Shu, Kai and Gu, Jindong and Bibi, Adel and Hu, Ziniu and Jurgens, David and Evans, James and Torr, Philip H.S. and Ghanem, Bernard and Li, Guohao and Xie, Chengxing and Chen, Canyu},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-0501},
url = {https://mlanthology.org/neurips/2024/jia2024neurips-large/}
}