Individual Reward Assisted Multi-Agent Reinforcement Learning

Abstract

In many real-world multi-agent systems, the sparsity of team rewards often makes it difficult for an algorithm to successfully learn a cooperative team policy. A common way to address this problem is to design dense individual rewards for the agents to guide cooperation. However, most existing works utilize individual rewards in ways that do not always promote teamwork and are sometimes even counterproductive. In this paper, we propose Individual Reward Assisted Team Policy Learning (IRAT), which learns two policies for each agent, one from the dense individual reward and one from the sparse team reward, with discrepancy constraints for updating the two policies mutually. Experimental results in different scenarios, such as the Multi-Agent Particle Environment and the Google Research Football Environment, show that IRAT significantly outperforms the baseline methods and can greatly promote team policy learning without deviating from the original team objective, even when the individual rewards are misleading or conflict with the team rewards.
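The abstract describes the core mechanism at a high level: each agent keeps an individual-reward policy and a team-reward policy, and a discrepancy constraint couples their updates. The following is only a minimal illustrative sketch of that idea, not the paper's exact objectives; it assumes a simple policy-gradient surrogate and a symmetric KL penalty as the discrepancy term, and all names (`AgentPolicies`, `irat_style_losses`, `beta`) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AgentPolicies(nn.Module):
    """Illustrative agent with two policy heads: one trained on the dense
    individual reward, one on the sparse team reward (sketch only)."""

    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.individual = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(), nn.Linear(hidden, n_actions))
        self.team = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(), nn.Linear(hidden, n_actions))

    def forward(self, obs):
        # Return log-probabilities over actions for both policies.
        return (F.log_softmax(self.individual(obs), dim=-1),
                F.log_softmax(self.team(obs), dim=-1))


def irat_style_losses(policies, obs, actions, adv_individual, adv_team, beta=0.5):
    """Hypothetical coupled losses: each policy maximizes its own advantage,
    while a symmetric KL term constrains the discrepancy between the two,
    so the dense individual signal guides, rather than replaces, the team policy."""
    log_pi_ind, log_pi_team = policies(obs)
    lp_ind = log_pi_ind.gather(1, actions.unsqueeze(1)).squeeze(1)
    lp_team = log_pi_team.gather(1, actions.unsqueeze(1)).squeeze(1)

    # REINFORCE-style stand-ins for the paper's actual surrogate objectives.
    pg_ind = -(lp_ind * adv_individual).mean()
    pg_team = -(lp_team * adv_team).mean()

    # Discrepancy constraint: symmetric KL between the two action distributions.
    kl = (F.kl_div(log_pi_team, log_pi_ind.exp(), reduction="batchmean")
          + F.kl_div(log_pi_ind, log_pi_team.exp(), reduction="batchmean"))

    return pg_ind + beta * kl, pg_team + beta * kl
```

In this sketch, the team policy is the one ultimately deployed; the individual policy and the KL coupling only shape its learning signal, which is consistent with the abstract's claim that the team objective is not abandoned even when individual rewards conflict with it.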

Cite

Text

Wang et al. "Individual Reward Assisted Multi-Agent Reinforcement Learning." International Conference on Machine Learning, 2022.

Markdown

[Wang et al. "Individual Reward Assisted Multi-Agent Reinforcement Learning." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/wang2022icml-individual/)

BibTeX

@inproceedings{wang2022icml-individual,
  title     = {{Individual Reward Assisted Multi-Agent Reinforcement Learning}},
  author    = {Wang, Li and Zhang, Yupeng and Hu, Yujing and Wang, Weixun and Zhang, Chongjie and Gao, Yang and Hao, Jianye and Lv, Tangjie and Fan, Changjie},
  booktitle = {International Conference on Machine Learning},
  year      = {2022},
  pages     = {23417--23432},
  volume    = {162},
  url       = {https://mlanthology.org/icml/2022/wang2022icml-individual/}
}