Practical Schemes for Finding Near-Stationary Points of Convex Finite-Sums

Abstract

In convex optimization, the problem of finding near-stationary points has received far less attention than other optimality measures, such as the function value. Even in the deterministic case, the optimal method (OGM-G, due to Kim and Fessler (2021)) was discovered only recently. In this work, we conduct a systematic study of algorithmic techniques for finding near-stationary points of convex finite-sums. Our main contributions are several algorithmic discoveries: (1) we discover a memory-saving variant of OGM-G based on the performance estimation problem approach (Drori and Teboulle, 2014); (2) we design a new accelerated SVRG variant that simultaneously achieves fast rates for minimizing both the gradient norm and the function value; (3) we propose an adaptively regularized accelerated SVRG variant, which does not require knowledge of certain unknown initial constants and achieves near-optimal complexities. We emphasize the simplicity and practicality of the new schemes, which could facilitate future work.
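
For context, the accelerated variants in contributions (2) and (3) build on the classical SVRG gradient estimator. The sketch below shows plain (non-accelerated) SVRG for a convex finite-sum f(x) = (1/n) * sum_i f_i(x), to illustrate that estimator only; it is not the paper's algorithm, and the interface (a user-supplied grad_i), the inner-loop length m, and the constant step size are assumptions made for illustration.

import numpy as np

def svrg(grad_i, n, x0, step, n_epochs=20, m=None, rng=None):
    """Minimal SVRG sketch. grad_i(i, x) returns the gradient of f_i at x."""
    rng = np.random.default_rng() if rng is None else rng
    m = n if m is None else m          # inner-loop length (commonly O(n))
    x_ref = np.asarray(x0, dtype=float).copy()  # snapshot ("reference") point
    for _ in range(n_epochs):
        # Full gradient at the snapshot, recomputed once per epoch.
        mu = sum(grad_i(i, x_ref) for i in range(n)) / n
        x = x_ref.copy()
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced estimator: unbiased for the full gradient,
            # with variance shrinking as x and x_ref approach the minimizer.
            g = grad_i(i, x) - grad_i(i, x_ref) + mu
            x = x - step * g
        x_ref = x                      # next snapshot
    return x_ref

The design point is that each inner step costs only two component-gradient evaluations yet has vanishing variance near the solution, which is what makes fast rates for the gradient norm plausible on top of this estimator.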

Cite

Text

Zhou et al. "Practical Schemes for Finding Near-Stationary Points of Convex Finite-Sums." Artificial Intelligence and Statistics, 2022.

Markdown

[Zhou et al. "Practical Schemes for Finding Near-Stationary Points of Convex Finite-Sums." Artificial Intelligence and Statistics, 2022.](https://mlanthology.org/aistats/2022/zhou2022aistats-practical/)

BibTeX

@inproceedings{zhou2022aistats-practical,
  title     = {{Practical Schemes for Finding Near-Stationary Points of Convex Finite-Sums}},
  author    = {Zhou, Kaiwen and Tian, Lai and So, Anthony Man-Cho and Cheng, James},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2022},
  pages     = {3684--3708},
  volume    = {151},
  url       = {https://mlanthology.org/aistats/2022/zhou2022aistats-practical/}
}