Competitive Gradient Optimization

Abstract

We study the problem of convergence to a stationary point in zero-sum games. We propose competitive gradient optimization (CGO), a gradient-based method whose iterative updates incorporate the interaction between the two players of a zero-sum game. We provide a continuous-time analysis of CGO and its convergence properties, and show that in the continuous limit, previous methods degenerate to their gradient descent ascent (GDA) variants. We further provide a rate of convergence to stationary points in the discrete-time setting. We introduce a generalized class of $\alpha$-coherent functions and show that for strictly $\alpha$-coherent functions, CGO ensures convergence to a saddle point. Moreover, we propose optimistic CGO (oCGO), an optimistic variant for which we show an $O(\frac{1}{n})$ rate of convergence to saddle points for $\alpha$-coherent functions.
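
To make the idea concrete, here is a minimal numerical sketch (not the paper's algorithm) contrasting plain GDA with a GDA update augmented by a cross-Hessian interaction term, in the spirit of the abstract's interaction-aware updates. The toy bilinear game $f(x, y) = x^\top A y$, the step size `eta`, the interaction weight `alpha`, and the helpers `gda_step`/`interaction_step` are all illustrative assumptions; the exact CGO and oCGO updates and their guarantees are defined in the paper.

```python
import numpy as np

# Toy bilinear zero-sum game f(x, y) = x^T A y: x minimizes, y maximizes.
# The unique saddle point is (0, 0). Plain GDA is known to spiral away on
# such games, which makes the effect of the interaction term easy to see.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

def gda_step(x, y, eta=0.05):
    """Plain gradient descent ascent: each player ignores the other's move."""
    gx, gy = A @ y, A.T @ x              # grad_x f, grad_y f
    return x - eta * gx, y + eta * gy

def interaction_step(x, y, eta=0.05, alpha=0.5):
    """GDA corrected with the cross-Hessian D_xy f = A (illustrative sketch).

    Each player linearizes the opponent's anticipated gradient step:
    x descends grad_x f(x, y + alpha * grad_y f) ~ gx + alpha * A @ gy,
    and symmetrically for y. The paper's actual CGO/oCGO updates may
    differ in the exact form and in how eta and alpha are coupled.
    """
    gx, gy = A @ y, A.T @ x
    dx = gx + alpha * (A @ gy)           # anticipate y ascending
    dy = gy - alpha * (A.T @ gx)         # anticipate x descending
    return x - eta * dx, y + eta * dy

x1 = x2 = np.ones(3)
y1 = y2 = np.ones(3)
for _ in range(2000):
    x1, y1 = gda_step(x1, y1)            # spirals outward
    x2, y2 = interaction_step(x2, y2)    # contracts toward the saddle
print("GDA distance to saddle:        ", np.linalg.norm(np.r_[x1, y1]))
print("interaction distance to saddle:", np.linalg.norm(np.r_[x2, y2]))
```

On this game the GDA iterates drift away from the saddle at the origin while the interaction-corrected iterates contract toward it, the qualitative behavior that the paper's convergence results formalize.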

Cite

Text

Vyas et al. "Competitive Gradient Optimization." International Conference on Machine Learning, 2023.

Markdown

[Vyas et al. "Competitive Gradient Optimization." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/vyas2023icml-competitive/)

BibTeX

@inproceedings{vyas2023icml-competitive,
  title     = {{Competitive Gradient Optimization}},
  author    = {Vyas, Abhijeet and Bullins, Brian and Azizzadenesheli, Kamyar},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {35243--35276},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/vyas2023icml-competitive/}
}