Virtual Personas for Language Models via an Anthology of Backstories

Abstract

Large language models (LLMs) are trained on vast repositories of text authored by millions of distinct individuals, reflecting an enormous diversity of human traits. While these models have the potential to serve as approximations of human subjects in behavioral studies, prior efforts have had limited success in steering model responses to match individual human users. In this work, we introduce Anthology, a method for conditioning LLMs to particular virtual personas by harnessing open-ended life narratives, which we refer to as backstories. We show that our methodology enhances the consistency and reliability of experimental outcomes while better representing diverse sub-populations. Across three nationally representative human surveys conducted as part of Pew Research Center's American Trends Panel (ATP), we demonstrate that Anthology achieves up to 18% improvement in matching the response distributions of human respondents and 27% improvement in consistency metrics.
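
To make the conditioning idea concrete, here is a minimal sketch of backstory-conditioned prompting as the abstract describes it: an open-ended narrative is placed before a survey question so the model answers in persona. This is not the authors' released code; the function names (build_survey_prompt, query_llm), the prompt wording, and the example backstory are all illustrative assumptions.

```python
# Minimal sketch of backstory-conditioned prompting (hypothetical names,
# not the authors' released implementation). A backstory elicited with an
# open-ended prompt is prepended when posing a survey question.

BACKSTORY_PROMPT = "Tell me about yourself."  # open-ended elicitation prompt


def build_survey_prompt(backstory: str, question: str, options: list[str]) -> str:
    """Condition the model on a backstory, then ask a multiple-choice question."""
    choices = "\n".join(f"({chr(65 + i)}) {opt}" for i, opt in enumerate(options))
    return (
        f"{BACKSTORY_PROMPT}\n{backstory}\n\n"
        "Answering as the person above, respond to the survey question.\n"
        f"Question: {question}\n{choices}\nAnswer:"
    )


def query_llm(prompt: str) -> str:
    """Stand-in for a call to any completion-style LLM API."""
    raise NotImplementedError("plug in your model client here")


if __name__ == "__main__":
    # Illustrative backstory; in Anthology such narratives are generated at scale.
    backstory = (
        "I grew up in a small town in Ohio, worked as a nurse for twenty "
        "years, and now spend my weekends volunteering at the library."
    )
    prompt = build_survey_prompt(
        backstory,
        question="How often do you follow national news?",
        options=["Most of the time", "Some of the time", "Hardly ever"],
    )
    print(prompt)  # feed this to query_llm once a model client is wired in
```

Keeping the backstory as a free-form prefix, rather than a list of demographic attributes, is what lets a single prompt template represent many distinct virtual personas.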

Cite

Text

Moon et al. "Virtual Personas for Language Models via an Anthology of Backstories." NeurIPS 2024 Workshops: Pluralistic-Alignment, 2024.

Markdown

[Moon et al. "Virtual Personas for Language Models via an Anthology of Backstories." NeurIPS 2024 Workshops: Pluralistic-Alignment, 2024.](https://mlanthology.org/neuripsw/2024/moon2024neuripsw-virtual-a/)

BibTeX

@inproceedings{moon2024neuripsw-virtual-a,
  title     = {{Virtual Personas for Language Models via an Anthology of Backstories}},
  author    = {Moon, Suhong and Abdulhai, Marwa and Kang, Minwoo and Suh, Joseph and Soedarmadji, Widyadewi and Behar, Eran Kohen and Chan, David},
  booktitle = {NeurIPS 2024 Workshops: Pluralistic-Alignment},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/moon2024neuripsw-virtual-a/}
}