SGD Generalizes Better than GD (And Regularization Doesn’t Help)
Abstract
We give a new separation result between the generalization performance of stochastic gradient descent (SGD) and of full-batch gradient descent (GD) in the fundamental stochastic convex optimization model. While for SGD it is well-known that $O(1/\epsilon^2)$ iterations suffice for obtaining a solution with $\epsilon$ excess expected risk, we show that with the same number of steps GD may overfit and emit a solution with $\Omega(1)$ generalization error. Moreover, we show that in fact $\Omega(1/\epsilon^4)$ iterations are necessary for GD to match the generalization performance of SGD, which is also tight due to recent work by Bassily et al. (2020). We further discuss how regularizing the empirical risk minimized by GD essentially does not change the above result, and revisit the concepts of stability, implicit bias and the role of the learning algorithm in generalization.
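As an illustrative aside (not part of the paper, and not its lower-bound construction), the following minimal sketch contrasts the two algorithms the abstract compares: one-pass SGD, which takes a gradient step on a fresh sample at each iteration, versus full-batch GD, which repeatedly descends on the fixed empirical risk of the same n samples. The toy quadratic objective and all names here are hypothetical; on such a smooth problem both methods generalize well, whereas the paper's separation is established on a carefully crafted non-smooth convex instance.

import numpy as np

# Hypothetical toy setup: convex loss f(w; z) = 0.5 * ||w - z||^2 with z ~ D,
# so the population risk F(w) = E_z[f(w; z)] is minimized at the mean of D.
rng = np.random.default_rng(0)
d, n, eta = 20, 200, 0.1
data = rng.normal(loc=1.0, scale=1.0, size=(n, d))  # training sample S ~ D^n

def grad(w, z):
    return w - z  # gradient of 0.5 * ||w - z||^2

# One-pass SGD: T = n steps, each using a fresh example
# (the O(1/eps^2)-iteration regime described in the abstract).
w_sgd = np.zeros(d)
for z in data:
    w_sgd -= eta * grad(w_sgd, z)

# Full-batch GD: T = n steps on the *empirical* risk over the same n examples.
w_gd = np.zeros(d)
for _ in range(n):
    w_gd -= eta * np.mean([grad(w_gd, z) for z in data], axis=0)

# Compare empirical risk with an estimate of the population risk.
test = rng.normal(loc=1.0, scale=1.0, size=(10_000, d))
emp = lambda w: 0.5 * np.mean(np.sum((w - data) ** 2, axis=1))
pop = lambda w: 0.5 * np.mean(np.sum((w - test) ** 2, axis=1))
print("SGD: empirical %.3f  population %.3f" % (emp(w_sgd), pop(w_sgd)))
print("GD:  empirical %.3f  population %.3f" % (emp(w_gd), pop(w_gd)))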
Cite
Text
Amir et al. "SGD Generalizes Better than GD (And Regularization Doesn’t Help)." Conference on Learning Theory, 2021.
Markdown
[Amir et al. "SGD Generalizes Better than GD (And Regularization Doesn’t Help)." Conference on Learning Theory, 2021.](https://mlanthology.org/colt/2021/amir2021colt-sgd/)
BibTeX
@inproceedings{amir2021colt-sgd,
title = {{SGD Generalizes Better than GD (And Regularization Doesn’t Help)}},
author = {Amir, Idan and Koren, Tomer and Livni, Roi},
booktitle = {Conference on Learning Theory},
year = {2021},
pages = {63--92},
volume = {134},
url = {https://mlanthology.org/colt/2021/amir2021colt-sgd/}
}