Last Iterate Convergence of Popov Method for Non-Monotone Stochastic Variational Inequalities

Abstract

This paper focuses on non-monotone stochastic variational inequalities (SVIs) that may not have a unique solution. A commonly used and efficient algorithm for solving VIs is the Popov method, which is known to achieve the optimal convergence rate for VIs with Lipschitz continuous, strongly monotone operators. We introduce a broader class of structured non-monotone operators, namely *$p$-quasi-sharp* operators ($p > 0$), for which the convergence behavior of algorithms can be analyzed tractably. We show that the stochastic Popov method converges *almost surely* to a solution for all operators in this class under a *linear growth* condition. In addition, we obtain the last-iterate convergence rate (in expectation) of the method under a *linear growth* condition for $2$-quasi-sharp operators. Based on our analysis, we refine these results for smooth $2$-quasi-sharp and $p$-quasi-sharp operators (on a compact set) and obtain the optimal convergence rates.
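For orientation, below is a minimal sketch of the stochastic Popov iteration in its commonly stated two-step form, where the extrapolation step reuses a stale operator value so that each iteration needs only one fresh oracle call. The oracle, step size, and problem instance here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def stochastic_popov(oracle, x0, gamma, num_iters, rng):
    """Sketch of the stochastic Popov iteration (unconstrained case).

    oracle(x, rng): unbiased stochastic estimate of the operator F(x).
    gamma: fixed step size (illustrative choice).
    Only one fresh oracle call is made per iteration; the value at the
    previous extrapolation point is reused, which is what distinguishes
    Popov's method from the extragradient method.
    """
    x = x0.copy()
    g_prev = oracle(x, rng)       # stale operator value at the leading point
    for _ in range(num_iters):
        y = x - gamma * g_prev    # extrapolation step with the stale value
        g = oracle(y, rng)        # single fresh oracle call this iteration
        x = x - gamma * g         # update step with the new value
        g_prev = g
    return x

# Illustrative instance: F(x) = A x with skew-symmetric A (monotone but
# not strongly monotone; unique solution x* = 0) plus Gaussian noise.
rng = np.random.default_rng(0)
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
oracle = lambda x, rng: A @ x + 0.01 * rng.standard_normal(2)
x_last = stochastic_popov(oracle, x0=np.ones(2), gamma=0.1, num_iters=5000, rng=rng)
print(x_last)  # near the solution, up to a noise-induced neighborhood
```

With a constant step size and persistent noise, the last iterate settles into a neighborhood of the solution whose size scales with the noise level; this is the regime in which the paper's last-iterate rates (under decreasing or problem-adapted step sizes) become relevant.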

Cite

Text

Vankov et al. "Last Iterate Convergence of Popov Method for Non-Monotone Stochastic Variational Inequalities." NeurIPS 2023 Workshops: OPT, 2023.

Markdown

[Vankov et al. "Last Iterate Convergence of Popov Method for Non-Monotone Stochastic Variational Inequalities." NeurIPS 2023 Workshops: OPT, 2023.](https://mlanthology.org/neuripsw/2023/vankov2023neuripsw-last/)

BibTeX

@inproceedings{vankov2023neuripsw-last,
  title     = {{Last Iterate Convergence of Popov Method for Non-Monotone Stochastic Variational Inequalities}},
  author    = {Vankov, Daniil and Nedich, Angelia and Sankar, Lalitha},
  booktitle = {NeurIPS 2023 Workshops: OPT},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/vankov2023neuripsw-last/}
}