Does Momentum Change the Implicit Regularization on Separable Data?
Abstract
The momentum acceleration technique is widely adopted in many optimization algorithms. However, there is no theoretical answer to how momentum affects the generalization performance of these algorithms. This paper studies the question by analyzing the implicit regularization of momentum-based optimization. We prove that, on linear classification problems with separable data and exponential-tailed loss, gradient descent with momentum (GDM) converges to the $L^2$ max-margin solution, which is the same solution as vanilla gradient descent. This means that gradient descent with momentum acceleration still converges to a low-complexity model, which guarantees its generalization. We then analyze the stochastic and adaptive variants of GDM (i.e., SGDM and deterministic Adam) and show that they also converge to the $L^2$ max-margin solution. Technically, the implicit regularization of SGDM is established via a novel convergence analysis of SGDM under a general noise condition called the affine noise variance condition. To the best of our knowledge, we are the first to derive SGDM's convergence under such an assumption. Numerical experiments are conducted to support our theoretical results.
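The abstract's claim can be checked numerically on a toy problem. Recall that the $L^2$ max-margin direction is the normalized solution of $\arg\min\{\|w\|_2 : y_i \langle w, x_i \rangle \ge 1 \ \forall i\}$. Below is a minimal sketch (not the paper's experiments; the dataset, hyperparameters, and function names are illustrative) that runs heavy-ball GDM on the logistic loss, an exponential-tailed loss, over a separable dataset constructed so that the max-margin direction is $(1, 0)$, and prints the normalized iterate:

import numpy as np

# Toy separable data (no bias term). By symmetry, the L2 max-margin
# direction is w* = (1, 0), supported by the points (1, 0) and (-1, 0).
X = np.array([[1.0, 0.0], [2.0, 3.0], [-1.0, 0.0], [-2.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

def logistic_loss_grad(w):
    # Gradient of (1/n) * sum_i log(1 + exp(-y_i <w, x_i>)).
    margins = y * (X @ w)
    coeff = -y / (1.0 + np.exp(margins))  # d/dm log(1 + e^{-m}) = -1 / (1 + e^m)
    return (coeff[:, None] * X).mean(axis=0)

def gdm(lr=0.1, beta=0.9, steps=100_000):
    # Heavy-ball gradient descent with momentum (illustrative hyperparameters).
    w = np.zeros(2)
    v = np.zeros(2)
    for _ in range(steps):
        v = beta * v + logistic_loss_grad(w)  # momentum buffer
        w = w - lr * v
    return w

w = gdm()
print("normalized GDM iterate:", w / np.linalg.norm(w))
# The direction approaches the max-margin solution (1, 0); convergence is
# slow (logarithmic in the step count), consistent with the implicit-bias
# analyses this paper builds on.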
Cite
Text
Wang et al. "Does Momentum Change the Implicit Regularization on Separable Data?." Neural Information Processing Systems, 2022.Markdown
[Wang et al. "Does Momentum Change the Implicit Regularization on Separable Data?." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/wang2022neurips-momentum/)BibTeX
@inproceedings{wang2022neurips-momentum,
title = {{Does Momentum Change the Implicit Regularization on Separable Data?}},
author = {Wang, Bohan and Meng, Qi and Zhang, Huishuai and Sun, Ruoyu and Chen, Wei and Ma, Zhi-Ming and Liu, Tie-Yan},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/wang2022neurips-momentum/}
}