Faster Rates for Convex-Concave Games
Abstract
We consider the use of no-regret algorithms to compute equilibria for particular classes of convex-concave games. While standard regret bounds would lead to convergence rates on the order of $O(T^{-1/2})$, recent work \citep{RS13,SALS15} has established $O(1/T)$ rates by taking advantage of a particular class of optimistic prediction algorithms. In this work we go further, showing that for a particular class of games one achieves an $O(1/T^2)$ rate, and we show how this applies to the Frank-Wolfe method, recovering a similar bound \citep{D15}. We also show that such no-regret techniques can even achieve a linear rate, $O(\exp(-T))$, for equilibrium computation under additional curvature assumptions.
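As a rough illustration of the optimistic-prediction idea the abstract refers to (not the paper's algorithm or rates), the sketch below runs optimistic gradient descent-ascent on a small bilinear saddle-point problem $\min_x \max_y x^\top A y$. The matrix, step size, and iteration count are all illustrative choices; the point is only that predicting the next gradient with the last observed one (the update $2g_t - g_{t-1}$) stabilizes a dynamic on which plain simultaneous gradient steps diverge.

```python
import numpy as np

# Illustrative bilinear game f(x, y) = x^T A y with equilibrium (0, 0).
# A, eta, and the step count are assumptions for this sketch.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
eta = 0.1

def run(optimistic, steps=5000):
    x = np.array([1.0, 1.0])
    y = np.array([1.0, 1.0])
    gx_prev = A @ y          # gradients at the initial point, used as the
    gy_prev = A.T @ x        # first "prediction" in the optimistic update
    for _ in range(steps):
        gx = A @ y           # grad_x f(x, y)
        gy = A.T @ x         # grad_y f(x, y)
        if optimistic:
            # Optimistic step: use 2*g_t - g_{t-1} as a one-step-ahead
            # prediction of the next gradient.
            x = x - eta * (2 * gx - gx_prev)
            y = y + eta * (2 * gy - gy_prev)
        else:
            # Plain simultaneous gradient descent-ascent: spirals outward
            # on bilinear games.
            x = x - eta * gx
            y = y + eta * gy
        gx_prev, gy_prev = gx, gy
    # Distance of the last iterate from the equilibrium (0, 0).
    return np.linalg.norm(x) + np.linalg.norm(y)

print("optimistic:", run(optimistic=True))   # converges toward 0
print("plain GDA: ", run(optimistic=False))  # diverges
```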
Cite

Text:
Abernethy et al. "Faster Rates for Convex-Concave Games." Annual Conference on Computational Learning Theory, 2018.

Markdown:
[Abernethy et al. "Faster Rates for Convex-Concave Games." Annual Conference on Computational Learning Theory, 2018.](https://mlanthology.org/colt/2018/abernethy2018colt-faster/)

BibTeX:
@inproceedings{abernethy2018colt-faster,
title = {{Faster Rates for Convex-Concave Games}},
author = {Abernethy, Jacob D. and Lai, Kevin A. and Levy, Kfir Y. and Wang, Jun-Kun},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2018},
pages = {1595--1625},
url = {https://mlanthology.org/colt/2018/abernethy2018colt-faster/}
}