SGDR: Stochastic Gradient Descent with Warm Restarts
Abstract
Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at this https URL
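The warm restart technique proposed in the paper anneals the learning rate with a cosine schedule within each run and resets it to its maximum value at every restart, with the run length growing by a fixed factor after each restart. The following is a minimal standalone Python sketch of that schedule, not the authors' released code; the function name sgdr_learning_rate and the example values eta_max = 0.05, T_0 = 10, T_mult = 2 are illustrative choices.

import math

def sgdr_learning_rate(epoch, eta_min=0.0, eta_max=0.05, T_0=10, T_mult=2):
    """Cosine-annealed learning rate with warm restarts (SGDR-style schedule).

    epoch   -- current (possibly fractional) epoch index since training began
    eta_min -- lower bound on the learning rate
    eta_max -- learning rate immediately after each restart
    T_0     -- length of the first run, in epochs
    T_mult  -- factor by which the run length grows after each restart
    """
    # Find the current run length T_i and the epochs elapsed since the last restart (T_cur).
    T_i, T_cur = T_0, epoch
    while T_cur >= T_i:
        T_cur -= T_i
        T_i *= T_mult
    # Cosine annealing within the current run:
    # eta = eta_min + 0.5 * (eta_max - eta_min) * (1 + cos(pi * T_cur / T_i))
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * T_cur / T_i))

# Example usage: with T_0=10 and T_mult=2, the rate decays over epochs 0-9,
# restarts at epoch 10, decays over epochs 10-29, restarts again at epoch 30, and so on.
for e in range(31):
    print(e, round(sgdr_learning_rate(e), 4))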
Cite
Text
Loshchilov and Hutter. "SGDR: Stochastic Gradient Descent with Warm Restarts." International Conference on Learning Representations, 2017.Markdown
[Loshchilov and Hutter. "SGDR: Stochastic Gradient Descent with Warm Restarts." International Conference on Learning Representations, 2017.](https://mlanthology.org/iclr/2017/loshchilov2017iclr-sgdr/)BibTeX
@inproceedings{loshchilov2017iclr-sgdr,
  title = {{SGDR: Stochastic Gradient Descent with Warm Restarts}},
  author = {Loshchilov, Ilya and Hutter, Frank},
  booktitle = {International Conference on Learning Representations},
  year = {2017},
  url = {https://mlanthology.org/iclr/2017/loshchilov2017iclr-sgdr/}
}