SAGA: A Fast Incremental Gradient Method with Support for Non-Strongly Convex Composite Objectives
Abstract
In this work we introduce a new fast incremental gradient method, SAGA, in the spirit of SAG, SDCA, MISO and SVRG. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates and support for composite objectives, where a proximal operator is used on the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem. We give experimental results showing the effectiveness of our method.
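To make the update concrete, below is a minimal Python sketch of the SAGA iteration for a composite objective (1/n) Σᵢ fᵢ(x) + h(x): keep a table of the last gradient seen for each fᵢ, take a step along an unbiased variance-reduced gradient, then apply the proximal operator of h. The `grad_i`/`prox` interface, the step size, and the lasso example are illustrative assumptions, not part of the paper; the paper itself only specifies the update rule.

```python
import numpy as np

def saga(grad_i, prox, x0, n, gamma, n_epochs=10):
    """Sketch of SAGA for min_x (1/n) sum_i f_i(x) + h(x).

    grad_i(i, x): gradient of f_i at x; prox(v, gamma): proximal
    operator of gamma * h. Both are assumed interfaces for illustration.
    """
    x = x0.copy()
    # Table of the most recent gradient stored for each f_i, plus its mean.
    table = np.array([grad_i(i, x) for i in range(n)])
    mean = table.mean(axis=0)
    for _ in range(n_epochs * n):
        j = np.random.randint(n)
        g = grad_i(j, x)
        # SAGA step: unbiased variance-reduced gradient, then proximal step.
        x = prox(x - gamma * (g - table[j] + mean), gamma)
        # Refresh the stored gradient for index j and its running mean.
        mean += (g - table[j]) / n
        table[j] = g
    return x

# Example use (assumed, for illustration): lasso with
# f_i(x) = 0.5*(a_i @ x - b_i)^2 and h(x) = lam*||x||_1.
A, b = np.random.randn(100, 10), np.random.randn(100)
lam = 0.1
grad_i = lambda i, x: (A[i] @ x - b[i]) * A[i]
soft = lambda v, g: np.sign(v) * np.maximum(np.abs(v) - g * lam, 0.0)
x = saga(grad_i, soft, np.zeros(10), n=100, gamma=1e-3)
```

Note the order of the last two updates: the running mean is adjusted by (g − table[j])/n before the table entry is overwritten, which keeps each iteration's bookkeeping O(dim) rather than recomputing the mean over all n stored gradients.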
Cite
Text
Defazio et al. "SAGA: A Fast Incremental Gradient Method with Support for Non-Strongly Convex Composite Objectives." Neural Information Processing Systems, 2014.
Markdown
[Defazio et al. "SAGA: A Fast Incremental Gradient Method with Support for Non-Strongly Convex Composite Objectives." Neural Information Processing Systems, 2014.](https://mlanthology.org/neurips/2014/defazio2014neurips-saga/)
BibTeX
@inproceedings{defazio2014neurips-saga,
title = {{SAGA: A Fast Incremental Gradient Method with Support for Non-Strongly Convex Composite Objectives}},
author = {Defazio, Aaron and Bach, Francis and Lacoste-Julien, Simon},
booktitle = {Neural Information Processing Systems},
year = {2014},
pages = {1646--1654},
url = {https://mlanthology.org/neurips/2014/defazio2014neurips-saga/}
}