Doubly Optimal No-Regret Learning in Monotone Games

Abstract

We consider online learning in multi-player smooth monotone games. Existing algorithms have limitations such as (1) being applicable only to strongly monotone games; (2) lacking the no-regret guarantee; (3) having only an asymptotic or slow $\mathcal{O}(\frac{1}{\sqrt{T}})$ last-iterate convergence rate to a Nash equilibrium. While the $\mathcal{O}(\frac{1}{\sqrt{T}})$ rate is tight for a large class of algorithms, including the well-studied extragradient algorithm and the optimistic gradient algorithm, it is not optimal for all gradient-based algorithms. We propose the accelerated optimistic gradient (AOG) algorithm, the first doubly optimal no-regret learning algorithm for smooth monotone games. Namely, our algorithm achieves both (i) the optimal $\mathcal{O}(\sqrt{T})$ regret in the adversarial setting under smooth and convex loss functions and (ii) the optimal $\mathcal{O}(\frac{1}{T})$ last-iterate convergence rate to a Nash equilibrium in multi-player smooth monotone games. As a byproduct of the accelerated last-iterate convergence rate, we further show that each player suffers only an $\mathcal{O}(\log T)$ individual worst-case dynamic regret, an exponential improvement over the previous state-of-the-art $\mathcal{O}(\sqrt{T})$ bound.
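
To make the setting concrete, below is a minimal Python/NumPy sketch of the optimistic gradient baseline named in the abstract (not the paper's AOG algorithm), run on a bilinear zero-sum game, which is monotone but not strongly monotone. The matrix, step size, and iteration count are illustrative assumptions; AOG is an accelerated variant that improves on this baseline's $\mathcal{O}(\frac{1}{\sqrt{T}})$ last-iterate rate.

# A minimal sketch of the optimistic gradient (OG) baseline, not the paper's AOG.
# Game: min_x max_y x^T A y, whose operator F(x, y) = (A y, -A^T x) is monotone
# but not strongly monotone. A, eta, and the horizon are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))

def F(z):
    """Monotone game operator stacking both players' gradients."""
    x, y = z[:n], z[n:]
    return np.concatenate([A @ y, -A.T @ x])

L = np.linalg.norm(A, 2)           # Lipschitz constant of F for this game
eta = 0.1 / L                      # conservative step size (assumption)
z = rng.standard_normal(2 * n)     # current iterate (x_t, y_t)
g_prev = F(z)                      # operator value at the previous iterate

for t in range(10_000):
    g = F(z)
    # Optimistic gradient step: use the previous operator value as a prediction.
    z = z - eta * (2 * g - g_prev)
    g_prev = g

# For a generic (invertible) A the unique Nash equilibrium is (0, 0), so the
# operator norm ||F(z_T)|| measures how close the last iterate is to equilibrium.
print("||F(z_T)|| =", np.linalg.norm(F(z)))

Running this shows the last iterate drifting toward the equilibrium, in contrast with plain gradient descent-ascent, whose last iterate diverges on bilinear games; the paper's contribution is an accelerated scheme whose last iterate converges at the optimal $\mathcal{O}(\frac{1}{T})$ rate.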

Cite

Text

Cai and Zheng. "Doubly Optimal No-Regret Learning in Monotone Games." International Conference on Machine Learning, 2023.

Markdown

[Cai and Zheng. "Doubly Optimal No-Regret Learning in Monotone Games." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/cai2023icml-doubly/)

BibTeX

@inproceedings{cai2023icml-doubly,
  title     = {{Doubly Optimal No-Regret Learning in Monotone Games}},
  author    = {Cai, Yang and Zheng, Weiqiang},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {3507--3524},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/cai2023icml-doubly/}
}