$\widetilde{O}(T^{-1})$ Convergence to (coarse) Correlated Equilibria in Full-Information General-Sum Markov Games

Abstract

No-regret learning has a long history of being closely connected to game theory. Recent works have devised uncoupled no-regret learning dynamics that, when adopted by all the players in normal-form games, converge to various equilibrium solutions at a near-optimal rate of $\widetilde{O}(T^{-1})$, a significant improvement over the $O(1/\sqrt{T})$ rate of classic no-regret learners. However, analogous convergence results are scarce in Markov games, a more general setting that lays the foundation for multi-agent reinforcement learning. In this work, we close this gap by showing that the optimistic-follow-the-regularized-leader (OFTRL) algorithm, together with appropriate value update procedures, can find $\widetilde{O}(T^{-1})$-approximate (coarse) correlated equilibria in full-information general-sum Markov games within $T$ iterations. Numerical results are also included to corroborate our theoretical findings.
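To make the OFTRL dynamics concrete, here is a minimal, illustrative sketch of the normal-form building block: two players each run optimistic FTRL with an entropy regularizer (so each update has a closed-form softmax) on a small general-sum matrix game, predicting that the last-observed utility vector will recur. The payoff matrices, step size, and horizon below are hypothetical choices for illustration, not values from the paper, and the Markov-game value-update machinery is omitted.

```python
import math

def oftrl_step(cum, pred, eta):
    # Optimistic FTRL with entropy regularizer: the argmax has a closed
    # form, a softmax over eta * (cumulative utility + predicted utility).
    logits = [eta * (c + p) for c, p in zip(cum, pred)]
    m = max(logits)  # shift for numerical stability
    w = [math.exp(v - m) for v in logits]
    s = sum(w)
    return [v / s for v in w]

def expected(M, dist):
    # Expected payoff of each own action against the opponent mixture.
    return [sum(row[j] * dist[j] for j in range(len(dist))) for row in M]

# Hypothetical 2x2 general-sum payoff matrices (for illustration only).
A = [[1.0, 0.0], [0.0, 1.0]]                     # row player's payoffs
B = [[0.5, 1.0], [1.0, 0.0]]                     # column player's payoffs
Bt = [[B[0][0], B[1][0]], [B[0][1], B[1][1]]]    # column player's view

eta, T = 0.1, 1000                               # assumed step size / horizon
cum_x = [0.0, 0.0]; cum_y = [0.0, 0.0]           # cumulative utility vectors
pred_x = [0.0, 0.0]; pred_y = [0.0, 0.0]         # optimistic predictions
real_x = real_y = 0.0                            # realized expected payoffs

for _ in range(T):
    x = oftrl_step(cum_x, pred_x, eta)
    y = oftrl_step(cum_y, pred_y, eta)
    ux = expected(A, y)                          # row player's utility vector
    uy = expected(Bt, x)                         # column player's utility vector
    cum_x = [c + u for c, u in zip(cum_x, ux)]
    cum_y = [c + u for c, u in zip(cum_y, uy)]
    pred_x, pred_y = ux, uy                      # optimism: last utility recurs
    real_x += sum(p * u for p, u in zip(x, ux))
    real_y += sum(p * u for p, u in zip(y, uy))

# External regret vs. the best fixed action in hindsight; when all players
# have small regret, the empirical play is an approximate coarse correlated
# equilibrium of the matrix game.
regret_x = max(cum_x) - real_x
regret_y = max(cum_y) - real_y
```

The optimism step (reusing the last utility vector as the prediction) is what lifts the regret guarantee from the classic $O(\sqrt{T})$ to near-constant when all players use the same dynamics; the paper's contribution is extending this effect, with suitable value updates, to Markov games.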

Cite

Text

Mao et al. "$\widetilde{O}(T^{-1})$ Convergence to (coarse) Correlated Equilibria in Full-Information General-Sum Markov Games." Proceedings of the 6th Annual Learning for Dynamics & Control Conference, 2024.

Markdown

[Mao et al. "$\widetilde{O}(T^{-1})$ Convergence to (coarse) Correlated Equilibria in Full-Information General-Sum Markov Games." Proceedings of the 6th Annual Learning for Dynamics & Control Conference, 2024.](https://mlanthology.org/l4dc/2024/mao2024l4dc-convergence/)

BibTeX

@inproceedings{mao2024l4dc-convergence,
  title     = {{$\widetilde{O}(T^{-1})$ Convergence to (coarse) Correlated Equilibria in Full-Information General-Sum Markov Games}},
  author    = {Mao, Weichao and Qiu, Haoran and Wang, Chen and Franke, Hubertus and Kalbarczyk, Zbigniew and Başar, Tamer},
  booktitle = {Proceedings of the 6th Annual Learning for Dynamics \& Control Conference},
  year      = {2024},
  pages     = {361--374},
  volume    = {242},
  url       = {https://mlanthology.org/l4dc/2024/mao2024l4dc-convergence/}
}