Local Convergence Properties of SAGA/Prox-SVRG and Acceleration
Abstract
In this paper, we present a local convergence analysis for a class of stochastic optimisation methods: the proximal variance reduced stochastic gradient methods, focusing mainly on SAGA (Defazio et al., 2014) and Prox-SVRG (Xiao & Zhang, 2014). Under the assumption that the non-smooth component of the optimisation problem is partly smooth relative to a smooth manifold, we present a unified framework for the local convergence analysis of SAGA/Prox-SVRG: (i) the sequences generated by the methods identify the smooth manifold in a finite number of iterations; (ii) the sequence then enters a local linear convergence regime. Furthermore, we discuss various possibilities for accelerating these algorithms, including adapting to better local parameters, and applying higher-order deterministic/stochastic optimisation methods which can achieve super-linear convergence. Several concrete examples arising from machine learning are considered to demonstrate the obtained results.
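To make the setting concrete, below is a minimal Python sketch of the proximal SAGA iteration the abstract refers to: a variance-reduced stochastic gradient step followed by a proximal map of the non-smooth term. The function names `prox_saga` and `soft_threshold`, the step size, and the lasso example data are illustrative assumptions, not code from the paper; the paper covers general partly-smooth regularisers, of which the l1 norm used here is one example.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1; an example of a partly-smooth regulariser.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_saga(grad_i, prox, x0, n, step, iters, rng=None):
    """Proximal SAGA sketch for min_x (1/n) sum_i f_i(x) + R(x).

    grad_i(i, x): gradient of the i-th smooth term f_i at x.
    prox(v, step): proximal map of step * R.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    table = np.array([grad_i(i, x) for i in range(n)])  # stored per-sample gradients
    avg = table.mean(axis=0)
    for _ in range(iters):
        i = rng.integers(n)
        g_new = grad_i(i, x)
        # Variance-reduced gradient estimate (unbiased for the full gradient).
        v = g_new - table[i] + avg
        x = prox(x - step * v, step)
        # Update the gradient table and its running average.
        avg += (g_new - table[i]) / n
        table[i] = g_new
    return x

# Hypothetical usage: l1-regularised least squares (lasso) on random data.
A, b, lam = np.random.randn(200, 50), np.random.randn(200), 0.1
grad = lambda i, x: A[i] * (A[i] @ x - b[i])
x_hat = prox_saga(grad, lambda v, t: soft_threshold(v, lam * t),
                  np.zeros(50), n=200, step=1e-2, iters=5000)
```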
Cite
Text
Poon et al. "Local Convergence Properties of SAGA/Prox-SVRG and Acceleration." International Conference on Machine Learning, 2018.
Markdown
[Poon et al. "Local Convergence Properties of SAGA/Prox-SVRG and Acceleration." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/poon2018icml-local/)
BibTeX
@inproceedings{poon2018icml-local,
title = {{Local Convergence Properties of SAGA/Prox-SVRG and Acceleration}},
author = {Poon, Clarice and Liang, Jingwei and Schoenlieb, Carola},
booktitle = {International Conference on Machine Learning},
year = {2018},
pages = {4124--4132},
volume = {80},
url = {https://mlanthology.org/icml/2018/poon2018icml-local/}
}