Non-Convex Finite-Sum Optimization via SCSG Methods
Abstract
We develop a class of algorithms, as variants of the stochastically controlled stochastic gradient (SCSG) methods, for the smooth non-convex finite-sum optimization problem. Assuming only the smoothness of each component, the complexity of SCSG to reach a stationary point with $\mathbb{E}\|\nabla f(x)\|^{2}\le \epsilon$ is $O(\min\{\epsilon^{-5/3}, \epsilon^{-1}n^{2/3}\})$, which strictly outperforms stochastic gradient descent. Moreover, SCSG is never worse than state-of-the-art methods based on variance reduction, and it significantly outperforms them when the target accuracy is low. A similar acceleration is achieved when the functions satisfy the Polyak-Lojasiewicz condition. Experiments demonstrate that SCSG outperforms stochastic gradient methods on training multi-layer neural networks in terms of both training and validation loss.
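To make the outer/inner structure concrete, below is a minimal Python sketch of one SCSG-style outer iteration, not the authors' released code: a large-batch gradient is computed at a snapshot point, the inner-loop length is drawn from a geometric distribution, and each inner step takes an SVRG-style variance-reduced update. The helper grad_i(x, idx) (averaging component gradients over an index set), the batch size B, mini-batch size b, and step size eta are illustrative assumptions, not quantities fixed by the paper.

import numpy as np

def scsg_epoch(grad_i, x_snapshot, n, B, b, eta, rng):
    """One outer iteration of an SCSG-style update (illustrative sketch).

    grad_i(x, idx): returns the average gradient of the components indexed by idx at x.
    """
    # Large-batch gradient at the snapshot; this is the "stochastic control" term.
    batch = rng.choice(n, size=B, replace=False)
    g = grad_i(x_snapshot, batch)

    # Inner-loop length drawn geometrically, with mean roughly B / b.
    N = rng.geometric(p=b / (B + b))

    x = x_snapshot.copy()
    for _ in range(N):
        mini = rng.choice(n, size=b, replace=False)
        # SVRG-style variance-reduced search direction.
        nu = grad_i(x, mini) - grad_i(x_snapshot, mini) + g
        x = x - eta * nu
    return x  # becomes the next snapshot

The geometrically distributed inner-loop length is what keeps the expected number of component-gradient evaluations per outer iteration proportional to B, which is how the analysis trades batch size against accuracy to obtain the stated complexity.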
Cite
Text
Lei et al. "Non-Convex Finite-Sum Optimization via SCSG Methods." Neural Information Processing Systems, 2017.
Markdown
[Lei et al. "Non-Convex Finite-Sum Optimization via SCSG Methods." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/lei2017neurips-nonconvex/)
BibTeX
@inproceedings{lei2017neurips-nonconvex,
title = {{Non-Convex Finite-Sum Optimization via SCSG Methods}},
author = {Lei, Lihua and Ju, Cheng and Chen, Jianbo and Jordan, Michael I},
booktitle = {Neural Information Processing Systems},
year = {2017},
pages = {2348-2358},
url = {https://mlanthology.org/neurips/2017/lei2017neurips-nonconvex/}
}