Finito: A Faster, Permutable Incremental Gradient Method for Big Data Problems
Abstract
Recent advances in optimization theory have shown that smooth strongly convex finite sums can be minimized faster than by treating them as a black-box "batch" problem. In this work we introduce a new method in this class with a theoretical convergence rate four times faster than existing methods, for sums with sufficiently many terms. This method is also amenable to a sampling-without-replacement scheme that in practice gives further speed-ups. We give empirical results showing state-of-the-art performance.
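As a rough illustration of the class of methods the abstract refers to, below is a minimal Python sketch of an incremental gradient loop that keeps a table of per-term iterates and their gradients and visits the terms in a fresh random permutation each epoch, i.e. a sampling-without-replacement ordering. The objective interface, the step size, and all names here are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

def finito_style(grad_fns, w0, mu, alpha=None, epochs=50, seed=0):
    """Sketch of a Finito-style incremental gradient loop (illustrative).

    grad_fns : list of n callables, grad_fns[i](w) -> gradient of f_i at w
    w0       : 1-D numpy array, the starting point
    mu       : strong convexity constant of the average (1/n) * sum_i f_i
    """
    n = len(grad_fns)
    rng = np.random.default_rng(seed)
    if alpha is None:
        # Assumed step size for illustration only; the paper ties its
        # step to mu under a big-data condition relating n, L, and mu.
        alpha = 1.0 / mu
    phi = np.tile(w0, (n, 1))                  # table of per-term iterates phi_i
    g = np.array([grad_fns[i](phi[i]) for i in range(n)])  # stored gradients
    phi_bar = phi.mean(axis=0)                 # running average of the phi_i
    g_bar = g.mean(axis=0)                     # running average of the gradients
    for _ in range(epochs):
        # Sampling WITHOUT replacement: one fresh permutation per epoch.
        for j in rng.permutation(n):
            w = phi_bar - alpha * g_bar        # new point from the two averages
            new_g = grad_fns[j](w)             # only one gradient evaluated per step
            phi_bar += (w - phi[j]) / n        # keep both averages current in O(d)
            g_bar += (new_g - g[j]) / n
            phi[j], g[j] = w, new_g
    return phi_bar - alpha * g_bar
```

For a quick test one could pass gradients of, say, ridge-regularized least-squares terms f_i(w) = (a_i·w − b_i)²/2 + (μ/2)‖w‖²; that choice is hypothetical and only serves to exercise the sketch.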
Cite
Text
Defazio et al. "Finito: A Faster, Permutable Incremental Gradient Method for Big Data Problems." International Conference on Machine Learning, 2014.Markdown
[Defazio et al. "Finito: A Faster, Permutable Incremental Gradient Method for Big Data Problems." International Conference on Machine Learning, 2014.](https://mlanthology.org/icml/2014/defazio2014icml-finito/)BibTeX
@inproceedings{defazio2014icml-finito,
title = {{Finito: A Faster, Permutable Incremental Gradient Method for Big Data Problems}},
author = {Defazio, Aaron and Domke, Justin and Caetano, Tiberio},
booktitle = {International Conference on Machine Learning},
year = {2014},
pages = {1125--1133},
volume = {32},
url = {https://mlanthology.org/icml/2014/defazio2014icml-finito/}
}