Uniform Convergence and Generalization for Nonconvex Stochastic Minimax Problems
Abstract
This paper studies uniform convergence and generalization bounds for nonconvex-(strongly)-concave (NC-SC/NC-C) stochastic minimax optimization. We first establish uniform convergence between the empirical minimax problem and the population minimax problem, yielding sample complexities of $\tilde{\mathcal{O}}(d\kappa^2\epsilon^{-2})$ and $\tilde{\mathcal{O}}(d\epsilon^{-4})$ for the NC-SC and NC-C settings, respectively, where $d$ is the dimension and $\kappa$ is the condition number. To the best of our knowledge, this is the first uniform convergence result measured by first-order stationarity in the stochastic minimax optimization literature.
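For context, a minimal sketch of the setting the abstract refers to, in standard stochastic minimax notation; the symbols $f$, $F$, $F_n$, $\Phi$, $\mathcal{D}$, and $n$ are assumed here for illustration and are not defined in the abstract itself. The population and empirical minimax problems are

$$\min_{x \in \mathbb{R}^d} \max_{y \in \mathcal{Y}} \; F(x,y) := \mathbb{E}_{\xi \sim \mathcal{D}}\bigl[f(x,y;\xi)\bigr], \qquad \min_{x \in \mathbb{R}^d} \max_{y \in \mathcal{Y}} \; F_n(x,y) := \frac{1}{n}\sum_{i=1}^{n} f(x,y;\xi_i),$$

where $F$ (resp. $F_n$) is nonconvex in $x$ and (strongly) concave in $y$. Writing $\Phi(x) := \max_{y \in \mathcal{Y}} F(x,y)$ and $\Phi_n(x) := \max_{y \in \mathcal{Y}} F_n(x,y)$ for the primal functions, a uniform convergence guarantee measured by first-order stationarity asks how large $n$ must be so that, with high probability,

$$\sup_{x} \bigl\|\nabla \Phi_n(x) - \nabla \Phi(x)\bigr\| \le \epsilon$$

(in the NC-C case, where $\Phi$ may be nonsmooth, a Moreau-envelope-type stationarity measure is typically used instead). The $\tilde{\mathcal{O}}(d\kappa^2\epsilon^{-2})$ and $\tilde{\mathcal{O}}(d\epsilon^{-4})$ bounds above are sample complexities of this form.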
Cite
Text
Zhang et al. "Uniform Convergence and Generalization for Nonconvex Stochastic Minimax Problems." NeurIPS 2022 Workshops: OPT, 2022.Markdown
[Zhang et al. "Uniform Convergence and Generalization for Nonconvex Stochastic Minimax Problems." NeurIPS 2022 Workshops: OPT, 2022.](https://mlanthology.org/neuripsw/2022/zhang2022neuripsw-uniform/)BibTeX
@inproceedings{zhang2022neuripsw-uniform,
title = {{Uniform Convergence and Generalization for Nonconvex Stochastic Minimax Problems}},
author = {Zhang, Siqi and Hu, Yifan and Zhang, Liang and He, Niao},
booktitle = {NeurIPS 2022 Workshops: OPT},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/zhang2022neuripsw-uniform/}
}