Large Catapults in Momentum Gradient Descent with Warmup: An Empirical Study
Abstract
Although gradient descent with momentum is widely used in modern deep learning, a concrete understanding of its effects on the training trajectory remains elusive. In this work, we empirically show that momentum gradient descent with a large learning rate and learning rate warmup displays large catapults, driving the iterates towards flatter minima than those found by gradient descent. We then provide empirical evidence and theoretical intuition that the large catapult is caused by momentum "amplifying" the self-stabilization effect (Damian et al., 2023).
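To make the setting concrete, below is a minimal sketch (not the authors' code) of the procedure the abstract describes: heavy-ball momentum gradient descent with linear learning-rate warmup, run on a toy two-parameter model with loss L(u, v) = 0.5 * (u * v - 1)^2, whose sharpness depends on the iterate. The toy loss and all hyperparameter values (momentum coefficient, peak learning rate, warmup length) are illustrative assumptions, not taken from the paper.

# Minimal sketch (illustrative, not the authors' code): heavy-ball momentum
# gradient descent with linear learning-rate warmup on the toy loss
# L(u, v) = 0.5 * (u * v - 1)^2. Hyperparameter values are assumptions.

beta = 0.9           # heavy-ball momentum coefficient
eta_max = 0.4        # peak learning rate reached at the end of warmup
warmup_steps = 100   # linear warmup length

u, v = 2.5, 0.1      # imbalanced init: starts in a relatively sharp region
mu, mv = 0.0, 0.0    # momentum buffers

for t in range(400):
    eta = eta_max * min(1.0, (t + 1) / warmup_steps)  # linear warmup schedule
    r = u * v - 1.0                                   # residual
    gu, gv = r * v, r * u                             # dL/du, dL/dv
    mu = beta * mu - eta * gu                         # heavy-ball update:
    mv = beta * mv - eta * gv                         #   m <- beta*m - eta*grad
    u, v = u + mu, v + mv                             #   w <- w + m
    if t % 50 == 0:
        loss = 0.5 * r * r
        sharpness = u * u + v * v  # ~ top Hessian eigenvalue near the minimum
        print(f"step {t:3d}  eta={eta:.3f}  loss={loss:.4f}  sharpness={sharpness:.2f}")

With the conservative peak learning rate above, the run stays stable; raising eta_max so that eta * sharpness exceeds the heavy-ball stability threshold (roughly 2 * (1 + beta)) probes the unstable regime in which the paper reports large catapults followed by convergence to flatter minima.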
Cite
Text
Phunyaphibarn et al. "Large Catapults in Momentum Gradient Descent with Warmup: An Empirical Study." NeurIPS 2023 Workshops: M3L, 2023.
Markdown
[Phunyaphibarn et al. "Large Catapults in Momentum Gradient Descent with Warmup: An Empirical Study." NeurIPS 2023 Workshops: M3L, 2023.](https://mlanthology.org/neuripsw/2023/phunyaphibarn2023neuripsw-large/)
BibTeX
@inproceedings{phunyaphibarn2023neuripsw-large,
  title     = {{Large Catapults in Momentum Gradient Descent with Warmup: An Empirical Study}},
  author    = {Phunyaphibarn, Prin and Lee, Junghyun and Wang, Bohan and Zhang, Huishuai and Yun, Chulhee},
  booktitle = {NeurIPS 2023 Workshops: M3L},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/phunyaphibarn2023neuripsw-large/}
}