AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients
Abstract
Optimization is at the core of modern deep learning. We propose the AdaBelief optimizer to simultaneously achieve three goals: fast convergence as in adaptive methods, good generalization as in SGD, and training stability. The intuition for AdaBelief is to adapt the stepsize according to the "belief" in the current gradient direction. Viewing the exponential moving average (EMA) of the noisy gradient as the prediction of the gradient at the next time step, if the observed gradient greatly deviates from the prediction, we distrust the current observation and take a small step; if the observed gradient is close to the prediction, we trust it and take a large step. We validate AdaBelief in extensive experiments, showing that it outperforms other methods with fast convergence and high accuracy on image classification and language modeling. Specifically, on ImageNet, AdaBelief achieves accuracy comparable to SGD. Furthermore, in the training of a GAN on Cifar10, AdaBelief demonstrates high stability and improves the quality of generated samples compared to a well-tuned Adam optimizer.
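As a concrete illustration of the update rule sketched in the abstract, below is a minimal NumPy sketch of a single AdaBelief-style step. The function name, default hyperparameters, toy problem, and the exact placement of the small constant eps are illustrative assumptions, not the authors' reference implementation.

import numpy as np

def adabelief_step(theta, grad, m, s, t, lr=1e-3,
                   beta1=0.9, beta2=0.999, eps=1e-8):
    """One AdaBelief-style update on parameters `theta` (illustrative sketch)."""
    m = beta1 * m + (1 - beta1) * grad               # EMA of gradients: the "prediction"
    s = beta2 * s + (1 - beta2) * (grad - m) ** 2    # EMA of squared deviation: the "belief"
    m_hat = m / (1 - beta1 ** t)                     # bias corrections, as in Adam
    s_hat = s / (1 - beta2 ** t)
    # Small deviation (gradient close to its prediction) -> small denominator -> large step;
    # large deviation -> large denominator -> small, cautious step.
    theta = theta - lr * m_hat / (np.sqrt(s_hat) + eps)
    return theta, m, s

# Toy usage: minimize f(x) = 0.5 * ||x||^2, whose gradient is x itself.
x = np.array([1.0, -2.0])
m = np.zeros_like(x)
s = np.zeros_like(x)
for t in range(1, 201):
    x, m, s = adabelief_step(x, x, m, s, t, lr=0.1)
print(x)  # x is driven toward the minimum at the origin

Compared with Adam, the only change in this sketch is the denominator: it tracks the squared deviation (g_t - m_t)^2 rather than g_t^2, so a large but well-predicted gradient still produces a large step.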
Cite
Text
Zhuang et al. "AdaBelief Optimizer: Adapting Stepsizes by theBelief in Observed Gradients." NeurIPS 2020 Workshops: DL-IG, 2020.Markdown
[Zhuang et al. "AdaBelief Optimizer: Adapting Stepsizes by theBelief in Observed Gradients." NeurIPS 2020 Workshops: DL-IG, 2020.](https://mlanthology.org/neuripsw/2020/zhuang2020neuripsw-adabelief/)BibTeX
@inproceedings{zhuang2020neuripsw-adabelief,
title = {{AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients}},
author = {Zhuang, Juntang and Tang, Tommy and Tatikonda, Sekhar and Dvornek, Nicha C. and Ding, Yifan and Papademetris, Xenophon and Duncan, James S.},
booktitle = {NeurIPS 2020 Workshops: DL-IG},
year = {2020},
url = {https://mlanthology.org/neuripsw/2020/zhuang2020neuripsw-adabelief/}
}