Position: LLM Social Simulations Are a Promising Research Method

Abstract

Accurate and verifiable large language model (LLM) simulations of human research subjects promise an accessible data source for understanding human behavior and training new AI systems. However, results to date have been limited, and few social scientists have adopted this method. In this position paper, we argue that the promise of LLM social simulations can be achieved by addressing five tractable challenges. We ground our argument in a review of empirical comparisons between LLMs and human research subjects, commentaries on the topic, and related work. We identify promising directions, including context-rich prompting and fine-tuning with social science datasets. We believe that LLM social simulations can already be used for pilot and exploratory studies, and more widespread use may soon be possible with rapidly advancing LLM capabilities. Researchers should prioritize developing conceptual models and iterative evaluations to make the best use of new AI systems.

Cite

Text

Anthis et al. "Position: LLM Social Simulations Are a Promising Research Method." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Anthis et al. "Position: LLM Social Simulations Are a Promising Research Method." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/anthis2025icml-position/)

BibTeX

@inproceedings{anthis2025icml-position,
  title     = {{Position: LLM Social Simulations Are a Promising Research Method}},
  author    = {Anthis, Jacy Reese and Liu, Ryan and Richardson, Sean M. and Kozlowski, Austin C. and Koch, Bernard and Brynjolfsson, Erik and Evans, James and Bernstein, Michael S.},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {81005--81034},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/anthis2025icml-position/}
}