On Your Mark, Get Set, Warmup!
Abstract
It is common in deep learning to warm up the learning rate $\eta$, often by a linear schedule between $\eta_{\text{init}} = 0$ and a predetermined target $\eta_{\text{trgt}}$. In this paper, we show through systematic experiments using SGD and Adam that the overwhelming benefit of warmup arises from allowing the network to tolerate larger $\eta_{\text{trgt}}$ by forcing it into better-conditioned regions of the loss landscape. The ability to handle larger $\eta_{\text{trgt}}$ makes hyperparameter tuning more robust while improving the final performance. We uncover different regimes of operation during the warmup period, depending on whether training starts off in a progressive sharpening or a sharpness reduction phase, which in turn depends on the initialization and parameterization. We also suggest an initialization for the variance in Adam which provides benefits similar to warmup.
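For concreteness, below is a minimal sketch (plain Python, not taken from the paper's code) of the linear warmup schedule described above, ramping the learning rate from $\eta_{\text{init}} = 0$ to $\eta_{\text{trgt}}$ over a fixed number of steps; the function name and the step and learning-rate values in the example are illustrative assumptions.

def linear_warmup_lr(step, warmup_steps, eta_trgt, eta_init=0.0):
    """Linearly interpolate the learning rate from eta_init to eta_trgt
    over the first `warmup_steps` steps, then hold it at eta_trgt."""
    if step >= warmup_steps:
        return eta_trgt
    return eta_init + (eta_trgt - eta_init) * step / warmup_steps

# Example (hypothetical values): 1,000 warmup steps toward eta_trgt = 0.1
lrs = [linear_warmup_lr(t, warmup_steps=1000, eta_trgt=0.1) for t in range(2000)]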
Cite
Text
Kalra and Barkeshli. "On Your Mark, Get Set, Warmup!." NeurIPS 2024 Workshops: M3L, 2024.
Markdown
[Kalra and Barkeshli. "On Your Mark, Get Set, Warmup!." NeurIPS 2024 Workshops: M3L, 2024.](https://mlanthology.org/neuripsw/2024/kalra2024neuripsw-your/)
BibTeX
@inproceedings{kalra2024neuripsw-your,
title = {{On Your Mark, Get Set, Warmup!}},
author = {Kalra, Dayal Singh and Barkeshli, Maissam},
booktitle = {NeurIPS 2024 Workshops: M3L},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/kalra2024neuripsw-your/}
}