Communication-Efficient Actor-Critic Methods for Homogeneous Markov Games

Abstract

Recent success in cooperative multi-agent reinforcement learning (MARL) relies on centralized training and policy sharing. Centralized training eliminates the issue of non-stationarity in MARL yet induces large communication costs, and policy sharing is empirically crucial to efficient learning in certain tasks yet lacks theoretical justification. In this paper, we formally characterize a subclass of cooperative Markov games where agents exhibit a certain form of homogeneity such that policy sharing provably incurs no suboptimality. This enables us to develop the first consensus-based decentralized actor-critic method where the consensus update is applied to both the actors and the critics while ensuring convergence. We also develop practical algorithms based on our decentralized actor-critic method to reduce the communication cost during training, while still yielding policies comparable to those obtained with centralized training.
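The core mechanism the abstract refers to is a consensus (gossip) step, in which each agent mixes its parameters with those of its neighbors via a doubly stochastic weight matrix before taking local gradient steps. The sketch below is a minimal illustration of such a step applied to both actor and critic parameters; the function name `consensus_update`, the fully connected mixing matrix, and the commented-out gradient terms are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def consensus_update(params, W):
    """One consensus (gossip) step: each agent replaces its parameter
    vector with a W-weighted average of its neighbors' parameters.

    params: (n_agents, dim) array, one parameter vector per agent.
    W:      (n_agents, n_agents) doubly stochastic mixing matrix whose
            sparsity pattern matches the communication graph.
    """
    return W @ params

# Illustrative use (hypothetical sizes): agents mix both critic and
# actor parameters, then would take a local gradient step computed
# from their own observations and rewards.
n_agents, dim = 4, 8
rng = np.random.default_rng(0)
W = np.full((n_agents, n_agents), 1.0 / n_agents)  # fully connected example
critic = rng.normal(size=(n_agents, dim))
actor = rng.normal(size=(n_agents, dim))

for step in range(100):
    critic = consensus_update(critic, W)  # - lr * local_critic_grad in practice
    actor = consensus_update(actor, W)    # - lr * local_actor_grad in practice

# With a doubly stochastic W, repeated mixing drives all agents'
# parameters toward their average, which is what makes policy sharing
# emerge without a central server.
print(np.allclose(actor, actor.mean(axis=0), atol=1e-6))  # True
```

Restricting the nonzeros of `W` to a sparse communication graph is what reduces communication cost relative to centralized training, at the price of slower consensus.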

Cite

Text

Chen et al. "Communication-Efficient Actor-Critic Methods for Homogeneous Markov Games." International Conference on Learning Representations, 2022.

Markdown

[Chen et al. "Communication-Efficient Actor-Critic Methods for Homogeneous Markov Games." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/chen2022iclr-communicationefficient/)

BibTeX

@inproceedings{chen2022iclr-communicationefficient,
  title     = {{Communication-Efficient Actor-Critic Methods for Homogeneous Markov Games}},
  author    = {Chen, Dingyang and Li, Yile and Zhang, Qi},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/chen2022iclr-communicationefficient/}
}