Stochastic Quasi-Variational Inequalities: Convergence Analysis Beyond Strong Monotonicity

Abstract

The Variational Inequality (VI) is a well-established framework for modeling Nash equilibrium and saddle-point problems. However, its generalization, the Quasi-Variational Inequality (QVI), in which the constraint set depends on the decision variable, is far less understood, and existing results focus on strongly monotone cases. This paper proposes an extra-gradient method for a class of monotone Stochastic Quasi-Variational Inequalities (SQVIs) and provides the first convergence rate analysis for the non-strongly monotone setting. Our approach not only advances the theoretical understanding of SQVIs but also demonstrates their practical applicability.
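To make the iteration concrete, below is a minimal NumPy sketch of an extra-gradient loop for a stochastic QVI on a toy problem: a skew-symmetric linear operator (monotone but not strongly monotone) queried through a noisy oracle, with a decision-dependent box constraint. The operator, the moving set K(x), the step size, and the choice to project both steps onto K(x_k) are all illustrative assumptions, not the authors' exact scheme from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy operator F(x) = A x with A skew-symmetric: monotone but NOT
# strongly monotone, matching the regime the paper targets.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def F_sample(x, sigma=0.05):
    """Stochastic oracle: F(x) plus zero-mean Gaussian noise (assumed model)."""
    return A @ x + sigma * rng.standard_normal(x.shape)

def project_K(center, y):
    """Euclidean projection of y onto a moving box K(center).

    The upper bound shifts with the current iterate, a hypothetical
    choice used only to make the constraint set decision-dependent.
    """
    lo = np.zeros_like(y)
    hi = 1.0 + 0.25 * np.abs(center)
    return np.clip(y, lo, hi)

def extragradient_sqvi(x0, gamma=0.1, n_iters=2000):
    """Generic extra-gradient iteration for a stochastic QVI (sketch)."""
    x = x0.copy()
    for _ in range(n_iters):
        # Extrapolation step: gradient-like step, projected onto K(x_k).
        y = project_K(x, x - gamma * F_sample(x))
        # Update step: re-evaluate the operator at y, project again.
        x = project_K(x, x - gamma * F_sample(y))
    return x

x_star = extragradient_sqvi(np.array([0.8, 0.6]))
print("approximate SQVI solution:", x_star)
```

The defining feature of extra-gradient is the second operator evaluation at the extrapolated point y: this is what allows convergence under plain monotonicity, where a single projected step on an operator like the one above can cycle instead of converging.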

Cite

Text

Alizadeh and Jalilzadeh. "Stochastic Quasi-Variational Inequalities: Convergence Analysis Beyond Strong Monotonicity." NeurIPS 2024 Workshops: OPT, 2024.

Markdown

[Alizadeh and Jalilzadeh. "Stochastic Quasi-Variational Inequalities: Convergence Analysis Beyond Strong Monotonicity." NeurIPS 2024 Workshops: OPT, 2024.](https://mlanthology.org/neuripsw/2024/alizadeh2024neuripsw-stochastic/)

BibTeX

@inproceedings{alizadeh2024neuripsw-stochastic,
  title     = {{Stochastic Quasi-Variational Inequalities: Convergence Analysis Beyond Strong Monotonicity}},
  author    = {Alizadeh, Zeinab and Jalilzadeh, Afrooz},
  booktitle = {NeurIPS 2024 Workshops: OPT},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/alizadeh2024neuripsw-stochastic/}
}