An Optimistic Acceleration of AMSGrad for Nonconvex Optimization
Abstract
We propose a new variant of AMSGrad (Reddi et al., 2018), a popular adaptive gradient-based optimization algorithm widely used for training deep neural networks. Our algorithm incorporates prior knowledge about the sequence of consecutive mini-batch gradients and exploits its underlying structure, which makes the gradients sequentially predictable. By combining this predictability with ideas from optimistic online learning, the proposed algorithm can accelerate convergence and improve sample efficiency. After establishing a tighter regret upper bound under convexity conditions, we offer a complementary view of our algorithm that generalizes to the offline and stochastic nonconvex optimization settings. In the nonconvex case, we establish a non-asymptotic convergence bound that is independent of the initialization. We illustrate, via numerical experiments, the practical speedup on several deep learning models and benchmark datasets.
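Since only the abstract is given here, the sketch below is not the paper's exact algorithm. It is a minimal Python illustration of the idea described above: an AMSGrad-style update followed by an optimistic half-step along a guess of the next gradient. The predictor (a linear extrapolation of consecutive gradients), the function name, and all hyperparameter defaults are illustrative assumptions.

```python
import numpy as np

def optimistic_amsgrad_step(w, g, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One AMSGrad step augmented with an optimistic extra step.

    Sketch only: the gradient prediction is a simple extrapolation from the
    last two observed gradients, which may differ from the paper's predictor.
    """
    m, v, v_hat, g_prev = state
    # Standard AMSGrad moment estimates (keep the elementwise max of v).
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    v_hat = np.maximum(v_hat, v)
    denom = np.sqrt(v_hat) + eps
    # Base step with the observed gradient.
    w_half = w - lr * m / denom
    # Optimistic step: move further along a prediction of the next gradient,
    # here extrapolated linearly from consecutive gradients (an assumption).
    g_pred = 2.0 * g - g_prev
    m_pred = beta1 * m + (1 - beta1) * g_pred
    w_next = w_half - lr * m_pred / denom
    return w_next, (m, v, v_hat, g.copy())

# Toy usage: noisy gradients of f(w) = ||w||^2.
w = np.ones(5)
state = (np.zeros(5), np.zeros(5), np.zeros(5), np.zeros(5))
for t in range(200):
    g = 2.0 * w + 0.01 * np.random.randn(5)
    w, state = optimistic_amsgrad_step(w, g, state, lr=0.1)
```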
Cite
Text
Wang et al. "An Optimistic Acceleration of AMSGrad for Nonconvex Optimization." Proceedings of The 13th Asian Conference on Machine Learning, 2021.
Markdown
[Wang et al. "An Optimistic Acceleration of AMSGrad for Nonconvex Optimization." Proceedings of The 13th Asian Conference on Machine Learning, 2021.](https://mlanthology.org/acml/2021/wang2021acml-optimistic/)
BibTeX
@inproceedings{wang2021acml-optimistic,
title = {{An Optimistic Acceleration of AMSGrad for Nonconvex Optimization}},
author = {Wang, Jun-Kun and Li, Xiaoyun and Karimi, Belhal and Li, Ping},
booktitle = {Proceedings of The 13th Asian Conference on Machine Learning},
year = {2021},
pages = {422-437},
volume = {157},
url = {https://mlanthology.org/acml/2021/wang2021acml-optimistic/}
}