Evaluating Language Model Agency Through Negotiations

Abstract

We introduce an approach to evaluate language model (LM) agency using negotiation games. This approach better reflects real-world use cases and addresses some of the shortcomings of alternative LM benchmarks. Negotiation games enable us to study multi-turn and cross-model interactions, modulate complexity, and sidestep accidental evaluation data leakage. We use our approach to test six widely used and publicly accessible LMs, evaluating performance and alignment in both self-play and cross-play settings. Noteworthy findings include: (i) only the closed-source models tested here were able to complete these tasks; (ii) cooperative bargaining games proved the most challenging for the models; and (iii) even the most powerful models sometimes "lose" to weaker opponents.
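To make the self-play and cross-play protocol concrete, the sketch below shows one plausible shape of a multi-turn negotiation loop between two LM agents. It is a minimal illustration, not the paper's implementation: the `NegotiationGame` class, the `Agent` interface, and the `ACCEPT` agreement signal are all assumptions introduced here for clarity.

from dataclasses import dataclass, field
from typing import Callable, Optional

# An "agent" is any callable mapping the transcript so far to a reply,
# e.g. a thin wrapper around an LM chat API.
Agent = Callable[[str], str]

@dataclass
class NegotiationGame:
    rules: str                     # natural-language description of the game
    max_rounds: int = 10           # cap on rounds to keep games finite
    transcript: list = field(default_factory=list)

    def play(self, agent_a: Agent, agent_b: Agent) -> Optional[str]:
        """Alternate turns between two agents until one accepts or rounds run out."""
        for _ in range(self.max_rounds):
            for name, agent in (("A", agent_a), ("B", agent_b)):
                history = self.rules + "\n" + "\n".join(self.transcript)
                reply = agent(history)
                self.transcript.append(f"{name}: {reply}")
                if "ACCEPT" in reply:   # toy agreement signal
                    return reply        # deal reached
        return None                     # no agreement within the round limit

Under this sketch, self-play passes the same model as both agents (`game.play(model_x, model_x)`), while cross-play pairs two different models (`game.play(model_x, model_y)`), which is how weaker opponents can end up facing, and occasionally beating, stronger ones.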

Cite

Text

Davidson et al. "Evaluating Language Model Agency Through Negotiations." International Conference on Learning Representations, 2024.

Markdown

[Davidson et al. "Evaluating Language Model Agency Through Negotiations." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/davidson2024iclr-evaluating/)

BibTeX

@inproceedings{davidson2024iclr-evaluating,
  title     = {{Evaluating Language Model Agency Through Negotiations}},
  author    = {Davidson, Tim Ruben and Veselovsky, Veniamin and Kosinski, Michal and West, Robert},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/davidson2024iclr-evaluating/}
}