A Unified Analysis of Extra-Gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach
Abstract
In this paper we consider solving saddle point problems using two variants of Gradient Descent-Ascent algorithms: the Extra-gradient (EG) and Optimistic Gradient Descent Ascent (OGDA) methods. We show that both of these algorithms admit a unified analysis as approximations of the classical proximal point method for solving saddle point problems. This viewpoint enables us to develop a new framework for analyzing EG and OGDA in the bilinear and strongly convex-strongly concave settings. Moreover, we use the proximal point approximation interpretation to generalize the results for OGDA to a wide range of parameters.
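The two algorithms the abstract names can be sketched concretely on a bilinear problem f(x, y) = xᵀAy, whose unique saddle point is (0, 0). The sketch below is illustrative and not from the paper's code; the step size, matrix, and iteration counts are arbitrary choices. EG takes a half step to a midpoint and then a full step using the gradient at that midpoint, while OGDA corrects each gradient step by subtracting the previous gradient.

```python
import numpy as np

def extra_gradient(A, x, y, eta=0.5, steps=200):
    """Extra-gradient (EG): half step to a midpoint, then a full
    step using the gradient evaluated at that midpoint."""
    for _ in range(steps):
        x_mid = x - eta * (A @ y)        # descent half-step in x
        y_mid = y + eta * (A.T @ x)      # ascent half-step in y
        x, y = x - eta * (A @ y_mid), y + eta * (A.T @ x_mid)
    return x, y

def ogda(A, x, y, eta=0.1, steps=1000):
    """Optimistic GDA (OGDA): a gradient step plus a correction
    that subtracts the previous iterate's gradient."""
    gx_prev, gy_prev = A @ y, A.T @ x    # seed with the current gradients
    for _ in range(steps):
        gx, gy = A @ y, A.T @ x          # simultaneous gradients at (x, y)
        x = x - 2 * eta * gx + eta * gx_prev
        y = y + 2 * eta * gy - eta * gy_prev
        gx_prev, gy_prev = gx, gy
    return x, y

# Illustrative problem data (not from the paper).
A = np.array([[1.0, 0.2], [0.0, 1.0]])
x0, y0 = np.array([1.0, -1.0]), np.array([0.5, 1.0])

x_eg, y_eg = extra_gradient(A, x0.copy(), y0.copy())
x_og, y_og = ogda(A, x0.copy(), y0.copy())
```

Both iterates contract toward the saddle point (0, 0), whereas plain simultaneous Gradient Descent-Ascent diverges on this bilinear problem; the extra midpoint (EG) or previous-gradient correction (OGDA) is exactly what the paper interprets as approximating a proximal point step.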
Cite
Text
Mokhtari et al. "A Unified Analysis of Extra-Gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach." Artificial Intelligence and Statistics, 2020.
Markdown
[Mokhtari et al. "A Unified Analysis of Extra-Gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach." Artificial Intelligence and Statistics, 2020.](https://mlanthology.org/aistats/2020/mokhtari2020aistats-unified/)
BibTeX
@inproceedings{mokhtari2020aistats-unified,
title = {{A Unified Analysis of Extra-Gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach}},
author = {Mokhtari, Aryan and Ozdaglar, Asuman and Pattathil, Sarath},
booktitle = {Artificial Intelligence and Statistics},
year = {2020},
pages = {1497--1507},
volume = {108},
url = {https://mlanthology.org/aistats/2020/mokhtari2020aistats-unified/}
}