Optimal Rates for Multi-Pass Stochastic Gradient Methods
Abstract
We analyze the learning properties of the stochastic gradient method when multiple passes over the data and mini-batches are allowed. We study how regularization properties are controlled by the step-size, the number of passes and the mini-batch size. In particular, we consider the square loss and show that for a universal step-size choice, the number of passes acts as a regularization parameter, and optimal finite sample bounds can be achieved by early-stopping. Moreover, we show that larger step-sizes are allowed when considering mini-batches. Our analysis is based on a unifying approach, encompassing both batch and stochastic gradient methods as special cases. As a byproduct, we derive optimal convergence results for batch gradient methods (even in the non-attainable cases).
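The abstract describes the method only informally. The sketch below is a minimal illustration of multi-pass mini-batch SGD on the square loss, where the number of passes is tuned by early stopping on a held-out set. The function name, the constant step-size argument, and the validation-based stopping rule are illustrative assumptions, not the exact estimator or the universal step-size choice analyzed in the paper.

```python
import numpy as np


def multipass_minibatch_sgd(X, y, step_size, batch_size=1, max_passes=50,
                            X_val=None, y_val=None, seed=0):
    """Multi-pass mini-batch SGD for least squares (illustrative sketch).

    If a validation set is supplied, the iterate with the smallest
    validation error after any full pass is returned, so the number of
    passes effectively acts as the regularization parameter.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    best_w, best_err = w.copy(), np.inf

    for _ in range(max_passes):
        idx = rng.permutation(n)
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            Xb, yb = X[batch], y[batch]
            # gradient of 0.5 * mean squared error on the mini-batch
            grad = Xb.T @ (Xb @ w - yb) / len(batch)
            w -= step_size * grad
        # early stopping: monitor held-out error after each pass
        if X_val is not None:
            err = np.mean((X_val @ w - y_val) ** 2)
            if err < best_err:
                best_err, best_w = err, w.copy()

    return best_w if X_val is not None else w


# Toy usage (hypothetical data and step-size, for illustration only)
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 20))
y = X @ rng.standard_normal(20) + 0.1 * rng.standard_normal(500)
w_hat = multipass_minibatch_sgd(X[:400], y[:400], step_size=0.05,
                                batch_size=10, X_val=X[400:], y_val=y[400:])
```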
Cite
Text
Lin and Rosasco. "Optimal Rates for Multi-Pass Stochastic Gradient Methods." Journal of Machine Learning Research, 2017.
Markdown
[Lin and Rosasco. "Optimal Rates for Multi-Pass Stochastic Gradient Methods." Journal of Machine Learning Research, 2017.](https://mlanthology.org/jmlr/2017/lin2017jmlr-optimal/)
BibTeX
@article{lin2017jmlr-optimal,
title = {{Optimal Rates for Multi-Pass Stochastic Gradient Methods}},
author = {Lin, Junhong and Rosasco, Lorenzo},
journal = {Journal of Machine Learning Research},
year = {2017},
pages = {1--47},
volume = {18},
url = {https://mlanthology.org/jmlr/2017/lin2017jmlr-optimal/}
}