Welfare Diplomacy: Benchmarking Language Model Cooperation

Abstract

The growing capabilities and increasingly widespread deployment of AI systems necessitate robust benchmarks for measuring their cooperative capabilities. Unfortunately, most multi-agent benchmarks are either zero-sum or purely cooperative, providing limited opportunities for such measurements. We introduce a general-sum variant of the zero-sum board game Diplomacy—called Welfare Diplomacy—in which players must balance investing in military conquest and domestic welfare. We argue that Welfare Diplomacy facilitates both a clearer assessment of and stronger training incentives for cooperative capabilities. Our contributions are: (1) proposing the Welfare Diplomacy rules and implementing them via an open-source Diplomacy engine; (2) constructing baseline agents using zero-shot prompted language models; and (3) conducting experiments where we find that baselines using state-of-the-art models attain high social welfare but are exploitable. Our work aims to promote societal safety by aiding researchers in developing and assessing multi-agent AI systems. Code to evaluate Welfare Diplomacy and reproduce our experiments is available at https://anonymous.4open.science/r/welfare-diplomacy-72AC.

Cite

Text

Mukobi et al. "Welfare Diplomacy: Benchmarking Language Model Cooperation." NeurIPS 2023 Workshops: SoLaR, 2023.

Markdown

[Mukobi et al. "Welfare Diplomacy: Benchmarking Language Model Cooperation." NeurIPS 2023 Workshops: SoLaR, 2023.](https://mlanthology.org/neuripsw/2023/mukobi2023neuripsw-welfare/)

BibTeX

@inproceedings{mukobi2023neuripsw-welfare,
  title     = {{Welfare Diplomacy: Benchmarking Language Model Cooperation}},
  author    = {Mukobi, Gabriel and Erlebach, Hannah and Lauffer, Niklas and Hammond, Lewis and Chan, Alan and Clifton, Jesse},
  booktitle = {NeurIPS 2023 Workshops: SoLaR},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/mukobi2023neuripsw-welfare/}
}