The Wisdom of Partisan Crowds: Comparing Collective Intelligence in Humans and LLM-Based Agents
Abstract
Human groups are able to converge to more accurate beliefs through deliberation, even in the presence of polarization and partisan bias, a phenomenon known as the "wisdom of partisan crowds." Large Language Model (LLM) agents are increasingly being used to simulate human collective behavior, yet few benchmarks exist for evaluating their dynamics against the behavior of human groups. In this paper, we examine the extent to which the wisdom of partisan crowds emerges in groups of LLM-based agents prompted to role-play as partisan personas (e.g., Democrat or Republican). We find that these agents not only display human-like partisan biases but also converge to more accurate beliefs through deliberation, as humans do. We then identify several factors that interfere with convergence, including the use of chain-of-thought prompting and a lack of detail in personas. Conversely, fine-tuning on human data appears to enhance convergence. These findings show both the potential and the limitations of LLM-based agents as a model of human collective intelligence.
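As a rough illustration of the paradigm the abstract describes, the sketch below runs persona-conditioned agents through an initial estimate and two deliberation rounds in which each agent sees the group's previous answers. This is not the authors' code: the OpenAI chat API, the model name, the persona wording, the estimation task, and the group size are all illustrative assumptions.

# Hedged sketch of persona-conditioned deliberation; NOT the paper's code.
# Assumptions: OpenAI chat API, model name, personas, task, and group size.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONAS = ["a Democrat", "a Republican"]
QUESTION = "Estimate the current U.S. unemployment rate as a percentage."

def ask(persona: str, peer_estimates: list[str]) -> str:
    """Query one persona-conditioned agent, showing peers' prior estimates."""
    messages = [
        {"role": "system", "content": f"You are {persona}. Answer with a single number."},
        {"role": "user", "content": QUESTION},
    ]
    if peer_estimates:
        messages.append({
            "role": "user",
            "content": "Other group members estimated: " + ", ".join(peer_estimates)
                       + ". You may revise your answer. Reply with a single number.",
        })
    resp = client.chat.completions.create(model="gpt-4", messages=messages)
    return resp.choices[0].message.content.strip()

agents = PERSONAS * 3  # six agents, three per party
estimates = [ask(p, []) for p in agents]  # initial private estimates
for _ in range(2):  # deliberation: re-estimate after social exposure
    estimates = [ask(p, estimates) for p in agents]
print(estimates)

Tracking whether the group's mean error shrinks across rounds, relative to a human benchmark, is the kind of comparison the paper reports; manipulations such as chain-of-thought prompting would enter through changes to the system or user prompts.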
Cite
Text
Chuang et al. "The Wisdom of Partisan Crowds: Comparing Collective Intelligence in Humans and LLM-Based Agents." ICLR 2024 Workshops: LLMAgents, 2024.
Markdown
[Chuang et al. "The Wisdom of Partisan Crowds: Comparing Collective Intelligence in Humans and LLM-Based Agents." ICLR 2024 Workshops: LLMAgents, 2024.](https://mlanthology.org/iclrw/2024/chuang2024iclrw-wisdom/)
BibTeX
@inproceedings{chuang2024iclrw-wisdom,
  title = {{The Wisdom of Partisan Crowds: Comparing Collective Intelligence in Humans and LLM-Based Agents}},
  author = {Chuang, Yun-Shiuan and Harlalka, Nikunj and Suresh, Siddharth and Goyal, Agam and Hawkins, Robert D. and Yang, Sijia and Shah, Dhavan V. and Hu, Junjie and Rogers, Timothy T.},
  booktitle = {ICLR 2024 Workshops: LLMAgents},
  year = {2024},
  url = {https://mlanthology.org/iclrw/2024/chuang2024iclrw-wisdom/}
}