Quantifying the Self-Interest Level of Markov Social Dilemmas
Abstract
This paper introduces a novel method for estimating the self-interest level of Markov social dilemmas. We extend the concept of self-interest level from normal-form games to Markov games, providing a quantitative measure of the minimum reward exchange required to align individual and collective interests. We demonstrate our method on three environments from the Melting Pot suite, representing either common-pool resources or public goods. Our results illustrate how reward exchange can enable agents to transition from selfish to collective equilibria in a Markov social dilemma. This work contributes to multi-agent reinforcement learning by providing a practical tool for analysing complex, multistep social dilemmas. Our findings offer insights into how reward structures can promote or hinder cooperation, with potential applications in areas such as mechanism design.
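The abstract's core idea — the minimum reward exchange needed to make cooperation an equilibrium — can be illustrated in the simpler normal-form setting the paper generalizes from. The sketch below is an illustrative assumption, not the paper's Markov-game formulation: it uses a standard Prisoner's Dilemma and a pairwise exchange scheme `r_i' = (1 - e) * r_i + e * r_j`, then searches for the smallest exchange fraction `e` at which mutual cooperation becomes a Nash equilibrium.

```python
import numpy as np

# Illustrative payoffs for a Prisoner's Dilemma:
# T (temptation) > R (reward) > P (punishment) > S (sucker).
T, R, P, S = 5.0, 3.0, 1.0, 0.0

def exchanged(r_own, r_other, e):
    """Payoff after exchanging a fraction e of reward with the other player."""
    return (1.0 - e) * r_own + e * r_other

def mutual_cooperation_is_equilibrium(e):
    """(C, C) is an equilibrium if neither player gains by defecting alone."""
    payoff_stay = exchanged(R, R, e)      # keep cooperating
    payoff_deviate = exchanged(T, S, e)   # defect against a cooperator
    return payoff_stay >= payoff_deviate

def self_interest_level(num=100001):
    """Smallest exchange fraction e making (C, C) an equilibrium (grid search)."""
    for e in np.linspace(0.0, 1.0, num):
        if mutual_cooperation_is_equilibrium(e):
            return e
    return None

# For these payoffs the condition 3 >= 5(1 - e) gives e >= 0.4.
print(self_interest_level())
```

With these payoffs the closed-form answer is `e = (T - R) / (T - S) = 0.4`: once 40% of each player's reward is exchanged, defection no longer pays. The paper's contribution is estimating the analogous quantity for multi-step Markov games, where no closed form is available.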
Cite
Text
Willis et al. "Quantifying the Self-Interest Level of Markov Social Dilemmas." International Joint Conference on Artificial Intelligence, 2025. doi:10.24963/IJCAI.2025/33
Markdown
[Willis et al. "Quantifying the Self-Interest Level of Markov Social Dilemmas." International Joint Conference on Artificial Intelligence, 2025.](https://mlanthology.org/ijcai/2025/willis2025ijcai-quantifying/) doi:10.24963/IJCAI.2025/33
BibTeX
@inproceedings{willis2025ijcai-quantifying,
title = {{Quantifying the Self-Interest Level of Markov Social Dilemmas}},
author = {Willis, Richard and Du, Yali and Leibo, Joel Z. and Luck, Michael},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2025},
pages = {284--292},
doi = {10.24963/IJCAI.2025/33},
url = {https://mlanthology.org/ijcai/2025/willis2025ijcai-quantifying/}
}