Direct-Search for a Class of Stochastic Min-Max Problems

Abstract

Recent applications in machine learning have renewed the community's interest in min-max optimization problems. While gradient-based methods are widely used to solve such problems, there are many scenarios where they are ill-suited, or simply inapplicable because the gradient is not accessible. We investigate the use of direct-search methods, a class of derivative-free techniques that access the objective function only through an oracle. In this work, we design a novel algorithm for min-max saddle-point games in which the min and the max player are updated sequentially. We prove convergence of this algorithm under mild assumptions: the objective of the max-player satisfies the Polyak-Łojasiewicz (PL) condition, while the min-player is characterized by a nonconvex objective. Our method only assumes oracle estimates that are accurate, with a fixed probability, up to a dynamically adjusted level. To the best of our knowledge, our analysis is the first to address the convergence of a direct-search method for min-max objectives in a stochastic setting.
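The sequential update scheme sketched in the abstract can be illustrated with a minimal deterministic toy, not the paper's actual algorithm: the scalar players, the ± coordinate polling, the step-halving rule, and the toy objective below are all simplifying assumptions. Each player queries the objective only through function values, accepts a polled point that improves its own objective, and shrinks its step size after an unsuccessful poll.

```python
def direct_search_minmax(f, x0, y0, step=1.0, iters=200, shrink=0.5):
    """Alternating direct-search polling for min_x max_y f(x, y).

    Illustrative sketch only: scalar players, +/- coordinate polling,
    and simple step-size halving on unsuccessful polls.
    """
    x, y, sx, sy = x0, y0, step, step
    for _ in range(iters):
        # Min-player: accept the first polled point that decreases f(., y).
        for d in (sx, -sx):
            if f(x + d, y) < f(x, y):
                x += d
                break
        else:
            sx *= shrink  # unsuccessful poll: shrink the min-player's step
        # Max-player: accept the first polled point that increases f(x, .).
        for d in (sy, -sy):
            if f(x, y + d) > f(x, y):
                y += d
                break
        else:
            sy *= shrink  # unsuccessful poll: shrink the max-player's step
    return x, y

# Toy saddle f(x, y) = x^2 - y^2; f(x, .) is strongly concave in y,
# hence satisfies the PL condition. The unique saddle point is (0, 0).
x_star, y_star = direct_search_minmax(lambda x, y: x**2 - y**2, x0=1.0, y0=0.7)
```

On this separable toy the iterates approach the saddle point (0, 0); the stochastic setting analyzed in the paper additionally requires the probabilistically accurate oracle estimates described above.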

Cite

Text

Anagnostidis et al. "Direct-Search for a Class of Stochastic Min-Max Problems." Artificial Intelligence and Statistics, 2021.

Markdown

[Anagnostidis et al. "Direct-Search for a Class of Stochastic Min-Max Problems." Artificial Intelligence and Statistics, 2021.](https://mlanthology.org/aistats/2021/anagnostidis2021aistats-directsearch/)

BibTeX

@inproceedings{anagnostidis2021aistats-directsearch,
  title     = {{Direct-Search for a Class of Stochastic Min-Max Problems}},
  author    = {Anagnostidis, Sotirios-Konstantinos and Lucchi, Aur{\'e}lien and Diouane, Youssef},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2021},
  pages     = {3772--3780},
  volume    = {130},
  url       = {https://mlanthology.org/aistats/2021/anagnostidis2021aistats-directsearch/}
}