Last Iterate Convergence of SGD for Least-Squares in the Interpolation Regime.

Abstract

Motivated by the recent success of neural networks that can fit the training data perfectly \emph{and} generalize well, we study the noiseless model in the fundamental least-squares setup. We assume that an optimal predictor perfectly fits the inputs and outputs, $\langle \theta_* , \phi(X) \rangle = Y$, where $\phi(X)$ denotes a possibly infinite-dimensional non-linear feature map. To solve this problem, we consider the estimator given by the last iterate of stochastic gradient descent (SGD) with constant step-size. In this context, our contribution is twofold: (i) \emph{from a (stochastic) optimization perspective}, we exhibit an archetypal problem for which we can show explicitly the convergence of the final SGD iterate for a non-strongly convex problem with constant step-size, whereas usual results require some form of averaging, and (ii) \emph{from a statistical perspective}, we give explicit non-asymptotic convergence rates in the over-parameterized setting and leverage a \emph{fine-grained} parameterization of the problem to exhibit polynomial rates that can be faster than $O(1/T)$. The link with reproducing kernel Hilbert spaces is established.
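For intuition, the sketch below (not taken from the paper; the dimensions, step-size $\gamma$, and iteration count are assumed values chosen for illustration) runs constant step-size SGD on a synthetic noiseless least-squares problem where an interpolating predictor exists, and reports the excess risk of the \emph{last} iterate, without any averaging.

```python
import numpy as np

# Illustrative sketch only: constant step-size SGD on a synthetic
# over-parameterized, noiseless least-squares problem in which an
# interpolating predictor theta_star exists, i.e. <theta_star, x_i> = y_i.
rng = np.random.default_rng(0)
n, d = 100, 200                       # assumed sample size and dimension (d > n)
X = rng.standard_normal((n, d)) / np.sqrt(d)
theta_star = rng.standard_normal(d)
y = X @ theta_star                    # noiseless targets: exact interpolation

gamma = 0.5                           # assumed constant step-size
T = 20_000                            # assumed number of SGD iterations
theta = np.zeros(d)

for t in range(T):
    i = rng.integers(n)               # sample one observation uniformly at random
    residual = X[i] @ theta - y[i]
    theta -= gamma * residual * X[i]  # single stochastic gradient step

# Excess risk of the final (last) iterate, with no averaging of iterates.
excess_risk = 0.5 * np.mean((X @ theta - y) ** 2)
print(f"last-iterate excess risk after {T} steps: {excess_risk:.3e}")
```

Because the targets are noiseless, the stochastic gradient vanishes at any interpolating solution, which is what allows the last iterate to converge here despite the constant step-size and the lack of strong convexity.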

Cite

Text

Varre et al. "Last Iterate Convergence of SGD for Least-Squares in the Interpolation Regime." Neural Information Processing Systems, 2021.

Markdown

[Varre et al. "Last Iterate Convergence of SGD for Least-Squares in the Interpolation Regime." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/varre2021neurips-last/)

BibTeX

@inproceedings{varre2021neurips-last,
  title     = {{Last Iterate Convergence of SGD for Least-Squares in the Interpolation Regime.}},
  author    = {Varre, Aditya Vardhan and Pillaud-Vivien, Loucas and Flammarion, Nicolas},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/varre2021neurips-last/}
}