Near-Optimal Algorithms for Making the Gradient Small in Stochastic Minimax Optimization

Abstract

We study the problem of finding a near-stationary point in smooth minimax optimization. The recently proposed extra anchored gradient (EAG) methods achieve the optimal convergence rate for convex-concave minimax problems in the deterministic setting. However, the direct extension of EAG to stochastic optimization is inefficient. In this paper, we design a novel stochastic algorithm called Recursive Anchored IteratioN (RAIN). We show that RAIN achieves near-optimal stochastic first-order oracle (SFO) complexity for stochastic minimax optimization in both the convex-concave and strongly-convex-strongly-concave cases. In addition, we extend the idea of RAIN to structured nonconvex-nonconcave minimax problems and show that it also achieves near-optimal SFO complexity.
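
Since this page records only the abstract, the sketch below is not the paper's RAIN algorithm; it is a minimal, assumption-laden illustration of the extra anchored gradient (EAG) idea the abstract refers to, run on a toy bilinear saddle problem. The step size and anchor schedule are heuristic choices made for the example.

```python
import numpy as np

# Toy convex-concave saddle problem f(x, y) = x^T A y, whose saddle point is (0, 0).
# The stacked operator F(z) = (grad_x f, -grad_y f) is monotone and L-Lipschitz.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
L = np.linalg.norm(A, 2)  # Lipschitz constant of F for this bilinear example

def F(z):
    x, y = z[:5], z[5:]
    return np.concatenate([A @ y, -A.T @ x])

# Illustrative anchored extragradient loop (EAG-style, deterministic, not RAIN):
# every iterate is pulled back toward the anchor z0 with weight 1/(k+2),
# combined with a standard extragradient step. alpha is an assumed step size.
z0 = rng.standard_normal(10)
z = z0.copy()
alpha = 1.0 / (8.0 * L)
for k in range(200):
    beta = 1.0 / (k + 2)
    z_half = z + beta * (z0 - z) - alpha * F(z)    # extrapolation with anchoring
    z = z + beta * (z0 - z) - alpha * F(z_half)    # update with anchoring

print("gradient norm ||F(z)||:", np.linalg.norm(F(z)))
```

The printed operator norm shrinks with the iteration count, which is the "making the gradient small" criterion the title refers to; the stochastic and nonconvex-nonconcave settings handled by RAIN require the machinery developed in the paper itself.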

Cite

Text

Chen and Luo. "Near-Optimal Algorithms for Making the Gradient Small in Stochastic Minimax Optimization." Journal of Machine Learning Research, 2024.

Markdown

[Chen and Luo. "Near-Optimal Algorithms for Making the Gradient Small in Stochastic Minimax Optimization." Journal of Machine Learning Research, 2024.](https://mlanthology.org/jmlr/2024/chen2024jmlr-nearoptimal/)

BibTeX

@article{chen2024jmlr-nearoptimal,
  title     = {{Near-Optimal Algorithms for Making the Gradient Small in Stochastic Minimax Optimization}},
  author    = {Chen, Lesi and Luo, Luo},
  journal   = {Journal of Machine Learning Research},
  year      = {2024},
  pages     = {1--44},
  volume    = {25},
  url       = {https://mlanthology.org/jmlr/2024/chen2024jmlr-nearoptimal/}
}