Near-Optimal Policy Optimization for Correlated Equilibrium in General-Sum Markov Games
Abstract
We study policy optimization algorithms for computing correlated equilibria in multi-player general-sum Markov Games. Previous results achieve $\tilde{O}(T^{-1/2})$ convergence rate to a correlated equilibrium and an accelerated $\tilde{O}(T^{-3/4})$ convergence rate to the weaker notion of coarse correlated equilibrium. In this paper, we improve both results significantly by providing an uncoupled policy optimization algorithm that attains a near-optimal $\tilde{O}(T^{-1})$ convergence rate for computing a correlated equilibrium. Our algorithm is constructed by combining two main elements: (i) smooth value updates and (ii) the \emph{optimistic-follow-the-regularized-leader} algorithm with the log barrier regularizer.
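For intuition, a generic optimistic-follow-the-regularized-leader (OFTRL) update with the log barrier regularizer, written here in its standard single-player form over the simplex $\Delta(\mathcal{A})$ (the paper's exact per-state, per-player update may differ), is:

$$
x^{t+1} = \operatorname*{argmax}_{x \in \Delta(\mathcal{A})} \left\{ \left\langle x,\; \sum_{s=1}^{t} g^{s} + g^{t} \right\rangle + \frac{1}{\eta} \sum_{a \in \mathcal{A}} \log x(a) \right\},
$$

where $g^{s}$ is the reward vector observed in round $s$, the extra $g^{t}$ term is the optimistic prediction of the next round's reward, $\eta > 0$ is the learning rate, and $\sum_{a} \log x(a)$ is the log barrier regularizer.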
Cite
Text
Cai et al. "Near-Optimal Policy Optimization for Correlated Equilibrium in General-Sum Markov Games." Artificial Intelligence and Statistics, 2024.
Markdown
[Cai et al. "Near-Optimal Policy Optimization for Correlated Equilibrium in General-Sum Markov Games." Artificial Intelligence and Statistics, 2024.](https://mlanthology.org/aistats/2024/cai2024aistats-nearoptimal/)
BibTeX
@inproceedings{cai2024aistats-nearoptimal,
title = {{Near-Optimal Policy Optimization for Correlated Equilibrium in General-Sum Markov Games}},
author = {Cai, Yang and Luo, Haipeng and Wei, Chen-Yu and Zheng, Weiqiang},
booktitle = {Artificial Intelligence and Statistics},
year = {2024},
pages = {3889--3897},
volume = {238},
url = {https://mlanthology.org/aistats/2024/cai2024aistats-nearoptimal/}
}