Provably Fast Convergence of Independent Natural Policy Gradient for Markov Potential Games

Abstract

This work studies an independent natural policy gradient (NPG) algorithm for the multi-agent reinforcement learning problem in Markov potential games. It is shown that, under mild technical assumptions and the introduction of the *suboptimality gap*, the independent NPG method with an oracle providing exact policy evaluation reaches an $\epsilon$-Nash Equilibrium (NE) within $\mathcal{O}(1/\epsilon)$ iterations. This improves upon the previous best result of $\mathcal{O}(1/\epsilon^2)$ iterations and matches the $\mathcal{O}(1/\epsilon)$ rate achievable in the single-agent case. Empirical results for a synthetic potential game and a congestion game are presented to verify the theoretical bounds.
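To make the algorithm in the abstract concrete: the sketch below runs independent NPG on a one-state, two-player identical-interest matrix game, a special case of a Markov potential game where the shared payoff itself serves as the potential. With softmax (tabular) policies, the natural policy gradient step reduces to a multiplicative-weights update on exact Q-values, which plays the role of the exact policy-evaluation oracle here. This is a minimal illustration, not the paper's implementation; the payoff matrix `U`, the step size `eta`, and all variable names are illustrative choices.

```python
import numpy as np

# Payoff matrix of a 2-player identical-interest game (both agents receive
# U[a1, a2]), which is a potential game with potential equal to the payoff.
U = np.array([[1.0, 0.0],
              [0.0, 2.0]])

eta = 0.5  # step size (illustrative; not taken from the paper)
T = 200    # number of iterations

# Each agent independently maintains a mixed strategy over its two actions.
pi1 = np.full(2, 0.5)
pi2 = np.full(2, 0.5)

for t in range(T):
    # Exact policy-evaluation oracle: each agent's expected payoff per action,
    # marginalizing over the other agent's current policy.
    q1 = U @ pi2    # Q^1(a) = E_{a2 ~ pi2}[U[a, a2]]
    q2 = U.T @ pi1  # Q^2(a) = E_{a1 ~ pi1}[U[a1, a]]

    # Independent NPG step: under softmax parameterization the natural
    # gradient update is multiplicative weights on the Q-values.
    pi1 = pi1 * np.exp(eta * q1)
    pi1 /= pi1.sum()
    pi2 = pi2 * np.exp(eta * q2)
    pi2 /= pi2.sum()

print("pi1:", pi1, "pi2:", pi2)
```

Both agents update simultaneously using only their own Q-values, with no coordination; in this example the joint policy converges toward the pure Nash equilibrium at the second action pair, where the potential (payoff 2) is maximized.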

Cite

Text

Sun et al. "Provably Fast Convergence of Independent Natural Policy Gradient for Markov Potential Games." Neural Information Processing Systems, 2023.

Markdown

[Sun et al. "Provably Fast Convergence of Independent Natural Policy Gradient for Markov Potential Games." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/sun2023neurips-provably/)

BibTeX

@inproceedings{sun2023neurips-provably,
  title     = {{Provably Fast Convergence of Independent Natural Policy Gradient for Markov Potential Games}},
  author    = {Sun, Youbang and Liu, Tao and Zhou, Ruida and Kumar, P. R. and Shahrampour, Shahin},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/sun2023neurips-provably/}
}