Last-Iterate Convergence Rates for Min-Max Optimization
Abstract
While classic work in convex-concave min-max optimization relies on average-iterate convergence results, the emergence of nonconvex applications such as training Generative Adversarial Networks has led to renewed interest in last-iterate convergence guarantees. Proving last-iterate convergence is challenging because many natural algorithms, such as Simultaneous Gradient Descent/Ascent, provably diverge or cycle even in simple convex-concave min-max settings, and previous work on global last-iterate convergence rates has been limited to the bilinear and convex-strongly concave settings. In this work, we show that the Hamiltonian Gradient Descent (HGD) algorithm achieves linear convergence in a variety of more general settings, including convex-concave problems that satisfy a "sufficiently bilinear" condition. We also prove similar convergence rates for some parameter settings of the Consensus Optimization (CO) algorithm of Mescheder et al. (2017).
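The contrast described above can be made concrete on the simplest bilinear game, f(x, y) = x*y, where Simultaneous Gradient Descent/Ascent is known to spiral away from the saddle point while HGD, which runs gradient descent on the squared gradient norm (the "Hamiltonian"), converges linearly. The sketch below is illustrative only: the step size and the closed-form gradients are hand-derived for this toy objective and are not taken from the paper.

```python
# Toy comparison of HGD vs. simultaneous GDA on f(x, y) = x * y.
# For this f: df/dx = y and df/dy = x, so the Hamiltonian is
#   H(x, y) = (1/2) * (||df/dx||^2 + ||df/dy||^2) = (x**2 + y**2) / 2,
# and grad H = (x, y). All constants here are illustrative choices.

def hgd_bilinear(x, y, eta=0.1, steps=100):
    """Hamiltonian Gradient Descent: plain gradient descent on H."""
    for _ in range(steps):
        # grad H = (x, y) for this toy f, so each step contracts the
        # iterate by a factor (1 - eta): linear convergence to (0, 0).
        x, y = x - eta * x, y - eta * y
    return x, y

def gda_bilinear(x, y, eta=0.1, steps=100):
    """Simultaneous Gradient Descent/Ascent on f(x, y) = x * y."""
    for _ in range(steps):
        # Descent in x, ascent in y; the iterate norm grows by a factor
        # sqrt(1 + eta**2) each step, so GDA spirals outward.
        x, y = x - eta * y, y + eta * x
    return x, y
```

Starting from (1, 1), HGD drives the iterate to the saddle point at the origin, while GDA's iterate norm strictly increases, matching the divergence behavior the abstract refers to.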
Cite
Text
Abernethy et al. "Last-Iterate Convergence Rates for Min-Max Optimization." International Conference on Learning Representations, 2020.
Markdown
[Abernethy et al. "Last-Iterate Convergence Rates for Min-Max Optimization." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/abernethy2020iclr-lastiterate/)
BibTeX
@inproceedings{abernethy2020iclr-lastiterate,
  title = {{Last-Iterate Convergence Rates for Min-Max Optimization}},
  author = {Abernethy, Jacob and Lai, Kevin A. and Wibisono, Andre},
  booktitle = {International Conference on Learning Representations},
  year = {2020},
  url = {https://mlanthology.org/iclr/2020/abernethy2020iclr-lastiterate/}
}