Risk Bounds of Accelerated SGD for Overparameterized Linear Regression
Abstract
Accelerated stochastic gradient descent (ASGD) is a workhorse in deep learning. While existing optimization theory can explain its faster convergence, it falls short of explaining its better generalization. In this paper, we study the generalization of ASGD for overparameterized linear regression. We establish an instance-dependent excess risk bound for ASGD within each eigen-subspace of the data covariance matrix. Our analysis shows that (i) ASGD outperforms SGD in the subspace of small eigenvalues, while in the subspace of large eigenvalues its bias error decays more slowly than that of SGD; and (ii) the variance error of ASGD is always larger than that of SGD. Our results suggest that ASGD can outperform SGD when the difference between the initialization and the true weight vector is mostly confined to the subspace of small eigenvalues.
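To make the comparison concrete, below is a minimal simulation sketch; it is not the paper's algorithm, schedule, or experiments. It assumes Gaussian data with a power-law covariance spectrum, one-sample gradients, a Nesterov-style constant-momentum update for ASGD, and step-size/momentum values chosen purely for illustration, with the initialization error placed entirely in the small-eigenvalue subspace as in claim (i).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem instance (assumed, not from the paper): Gaussian data whose
# covariance H has a power-law spectrum, giving a clear split between "large" and
# "small" eigenvalue subspaces.
d = 200
eigvals = 1.0 / np.arange(1, d + 1) ** 2        # spectrum of the data covariance H
w_star = np.zeros(d)                            # true weight vector
w0 = np.zeros(d)
w0[d // 2:] = 1.0                               # initialization error confined to small-eigenvalue directions
noise_std = 0.1                                 # label noise level (illustrative)

def sample():
    """Draw one (x, y) pair with x ~ N(0, H) and y = <x, w_star> + noise."""
    x = rng.normal(size=d) * np.sqrt(eigvals)
    y = x @ w_star + noise_std * rng.normal()
    return x, y

def excess_risk(w):
    """Excess risk (w - w_star)^T H (w - w_star) for diagonal H."""
    diff = w - w_star
    return float(diff @ (eigvals * diff))

def sgd(steps=20_000, eta=0.5):
    """Plain one-sample SGD."""
    w = w0.copy()
    for _ in range(steps):
        x, y = sample()
        w = w - eta * (x @ w - y) * x
    return w

def asgd(steps=20_000, eta=0.1, beta=0.9):
    """Nesterov-style accelerated SGD with one-sample gradients; the fixed step
    size and momentum are illustrative choices, not the schedule analyzed in the paper."""
    w, w_prev = w0.copy(), w0.copy()
    for _ in range(steps):
        u = w + beta * (w - w_prev)              # look-ahead point
        x, y = sample()
        g = (x @ u - y) * x                      # stochastic gradient at u
        w_prev, w = w, u - eta * g
    return w

print("excess risk, SGD :", excess_risk(sgd()))
print("excess risk, ASGD:", excess_risk(asgd()))
```

Placing the initialization error w0 - w_star in the head of the spectrum instead would probe the opposite regime, where the abstract predicts SGD's bias error decays faster.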
Cite
Text
Li et al. "Risk Bounds of Accelerated SGD for Overparameterized Linear Regression." NeurIPS 2023 Workshops: OPT, 2023.

Markdown

[Li et al. "Risk Bounds of Accelerated SGD for Overparameterized Linear Regression." NeurIPS 2023 Workshops: OPT, 2023.](https://mlanthology.org/neuripsw/2023/li2023neuripsw-risk/)

BibTeX
@inproceedings{li2023neuripsw-risk,
title = {{Risk Bounds of Accelerated SGD for Overparameterized Linear Regression}},
author = {Li, Xuheng and Deng, Yihe and Wu, Jingfeng and Zhou, Dongruo and Gu, Quanquan},
booktitle = {NeurIPS 2023 Workshops: OPT},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/li2023neuripsw-risk/}
}