Status-Quo Policy Gradient in Multi-Agent Reinforcement Learning
Abstract
Individual rationality, which involves maximizing expected individual return, does not always lead to optimal individual or group outcomes in multi-agent problems. For instance, in social dilemma situations, Reinforcement Learning (RL) agents trained to maximize individual rewards converge to mutual defection, which is both individually and socially sub-optimal. In contrast, humans evolve individually and socially optimal strategies in such social dilemmas. Inspired by ideas from human psychology that attribute this behavior to the status-quo bias, we present a status-quo loss (SQLoss) and the corresponding policy gradient algorithm that incorporates this bias into an RL agent. We demonstrate that agents trained with SQLoss learn individually as well as socially optimal behavior in several social dilemma matrix games. To apply SQLoss to games where cooperation and defection are determined by a sequence of non-trivial actions, we present GameDistill, an algorithm that reduces a multi-step game with visual input to a matrix game. We empirically show that agents trained with SQLoss on the GameDistill-reduced versions of Coin Game and StagHunt evolve optimal policies. Finally, we show that SQLoss extends to a 4-agent setting by demonstrating the emergence of cooperative behavior in the well-known Braess' paradox.
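The abstract does not spell out the form of SQLoss, so the following is only a minimal, hypothetical sketch of how a status-quo term could be combined with a REINFORCE objective: alongside the real rollout, the agent imagines a rollout in which its previous action is repeated for a few steps (the status quo) and also ascends the policy gradient of that imagined return. The function name sq_policy_gradient_loss, the hyperparameter sq_weight, and the exact weighting scheme are assumptions for illustration, not the authors' released implementation.

# Hypothetical sketch, not the authors' code: REINFORCE plus an imagined
# status-quo term that nudges the agent toward sticking with its past action.
import torch

def sq_policy_gradient_loss(log_probs: torch.Tensor,
                            returns: torch.Tensor,
                            sq_log_probs: torch.Tensor,
                            sq_returns: torch.Tensor,
                            sq_weight: float = 1.0) -> torch.Tensor:
    """Combine the usual policy-gradient loss on the real rollout with a
    status-quo term computed on an imagined rollout where the agent's
    previous action is repeated (assumed formulation).

    log_probs    -- log pi(a_t | s_t) for actions actually taken, shape [T]
    returns      -- discounted returns G_t of the real rollout, shape [T]
    sq_log_probs -- log pi of the repeated (status-quo) actions, shape [T']
    sq_returns   -- discounted returns of the imagined rollout, shape [T']
    sq_weight    -- assumed coefficient trading off the two terms
    """
    pg_loss = -(log_probs * returns).mean()        # standard REINFORCE loss
    sq_loss = -(sq_log_probs * sq_returns).mean()  # imagined status-quo loss
    return pg_loss + sq_weight * sq_loss

In a social dilemma such as the iterated Prisoner's Dilemma, a term of this kind rewards staying with a mutually cooperative status quo once it is reached, which is the human bias the paper takes as inspiration.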
Cite

Text:
Badjatiya et al. "Status-Quo Policy Gradient in Multi-Agent Reinforcement Learning." NeurIPS 2021 Workshops: DeepRL, 2021.

BibTeX:
@inproceedings{badjatiya2021neuripsw-statusquo,
title = {{Status-Quo Policy Gradient in Multi-Agent Reinforcement Learning}},
author = {Badjatiya, Pinkesh and Sarkar, Mausoom and Puri, Nikaash and Subramanian, Jayakumar and Sinha, Abhishek and Singh, Siddharth and Krishnamurthy, Balaji},
booktitle = {NeurIPS 2021 Workshops: DeepRL},
year = {2021},
url = {https://mlanthology.org/neuripsw/2021/badjatiya2021neuripsw-statusquo/}
}