Lower Complexity Bounds for Finite-Sum Convex-Concave Minimax Optimization Problems
Abstract
This paper studies lower complexity bounds for minimax optimization problems whose objective function is the average of $n$ individual smooth convex-concave functions. We consider algorithms that have access to the gradient and proximal oracle of each individual component. For the strongly-convex-strongly-concave case, we prove that no such algorithm can reach an $\varepsilon$-suboptimal point in fewer than $\Omega\left((n+\kappa)\log(1/\varepsilon)\right)$ iterations, where $\kappa$ is the condition number of the objective function. This lower bound matches the upper bound of stochastic variance-reduced extragradient, an existing incremental first-order oracle algorithm. To establish the result, we develop a novel construction that partitions the tridiagonal matrix of classical examples into $n$ groups. This construction is amenable to the analysis of incremental gradient and proximal oracles, and we also extend the analysis to the general convex-concave case.
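To give a rough sense of the partitioning idea, here is a minimal sketch based only on the abstract; the specific matrix, scaling factor $n$, regularization parameter $\mu$, and group assignment below are our illustrative assumptions and may differ from the paper's exact construction. Classical hard instances couple $x$ and $y$ through a tridiagonal-type matrix such as

$$A = \begin{pmatrix} 1 & -1 & & \\ & 1 & -1 & \\ & & \ddots & \ddots \\ & & & 1 \end{pmatrix},$$

whose rows are split into $n$ groups, say $A = \sum_{i=1}^{n} A_i$ with $A_i$ retaining only the rows assigned to group $i$, giving a finite-sum objective of the form

$$f(x,y) = \frac{1}{n}\sum_{i=1}^{n} f_i(x,y), \qquad f_i(x,y) = n\, y^\top A_i x + \frac{\mu}{2}\|x\|^2 - \frac{\mu}{2}\|y\|^2 .$$

Intuitively, an oracle call to a single component $f_i$ touches only the coordinates appearing in its group of rows, so each iteration can propagate information along the coordinate chain by only a bounded amount; this locality is what makes the construction friendly to the analysis of incremental gradient and proximal oracles.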
Cite
Text
Xie et al. "Lower Complexity Bounds for Finite-Sum Convex-Concave Minimax Optimization Problems." International Conference on Machine Learning, 2020.Markdown
[Xie et al. "Lower Complexity Bounds for Finite-Sum Convex-Concave Minimax Optimization Problems." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/xie2020icml-lower/)BibTeX
@inproceedings{xie2020icml-lower,
title = {{Lower Complexity Bounds for Finite-Sum Convex-Concave Minimax Optimization Problems}},
author = {Xie, Guangzeng and Luo, Luo and Lian, Yijiang and Zhang, Zhihua},
booktitle = {International Conference on Machine Learning},
year = {2020},
pages = {10504--10513},
volume = {119},
url = {https://mlanthology.org/icml/2020/xie2020icml-lower/}
}