Iterate Averaging as Regularization for Stochastic Gradient Descent

Abstract

We propose and analyze a variant of the classic Polyak-Ruppert averaging scheme, broadly used in stochastic gradient methods. Rather than a uniform average of the iterates, we consider a weighted average, with weights decaying in a geometric fashion. In the context of linear least squares regression, we show that this averaging scheme has the same regularizing effect as, and is indeed asymptotically equivalent to, ridge regression. In particular, we derive finite-sample bounds for the proposed approach that match the best known results for regularized stochastic gradient methods.
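To make the idea concrete, below is a minimal sketch (not the authors' code) of single-pass SGD on a linear least squares problem with a geometrically weighted running average of the iterates, compared against an explicit ridge regression solution. The step size eta, decay factor beta, and the effective regularization lam = (1 - beta) / eta are illustrative assumptions, not values prescribed by the paper.

import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

eta = 0.01           # constant step size (illustrative)
beta = 0.995         # geometric decay factor for the averaging weights (illustrative)
w = np.zeros(d)      # current SGD iterate
w_avg = np.zeros(d)  # geometrically weighted running average of the iterates
norm = 0.0           # running normalization of the weights

for t in range(n):
    x_t, y_t = X[t], y[t]
    # single-sample stochastic gradient of the squared loss
    grad = (x_t @ w - y_t) * x_t
    w = w - eta * grad
    # weighted average with weights proportional to beta**(T - t),
    # so more recent iterates receive larger weight
    norm = beta * norm + 1.0
    w_avg = w_avg + (w - w_avg) / norm

# explicit ridge solution; lam is an illustrative stand-in for the
# regularization strength implied by the choice of (eta, beta)
lam = (1.0 - beta) / eta
w_ridge = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)

print("distance between averaged SGD and ridge:", np.linalg.norm(w_avg - w_ridge))

On well-conditioned problems the two solutions should be close, illustrating the regularizing effect of geometric iterate averaging that the paper analyzes formally.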

Cite

Text

Neu and Rosasco. "Iterate Averaging as Regularization for Stochastic Gradient Descent." Annual Conference on Computational Learning Theory, 2018.

Markdown

[Neu and Rosasco. "Iterate Averaging as Regularization for Stochastic Gradient Descent." Annual Conference on Computational Learning Theory, 2018.](https://mlanthology.org/colt/2018/neu2018colt-iterate/)

BibTeX

@inproceedings{neu2018colt-iterate,
  title     = {{Iterate Averaging as Regularization for Stochastic Gradient Descent}},
  author    = {Neu, Gergely and Rosasco, Lorenzo},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2018},
  pages     = {3222-3242},
  url       = {https://mlanthology.org/colt/2018/neu2018colt-iterate/}
}