On the Convergence of Adaptive Gradient Methods for Nonconvex Optimization
Abstract
Adaptive gradient methods are workhorses in deep learning. However, their convergence guarantees for nonconvex optimization have not been thoroughly studied. In this paper, we provide a fine-grained convergence analysis for a general class of adaptive gradient methods including AMSGrad, RMSProp, and AdaGrad. For smooth nonconvex functions, we prove that adaptive gradient methods converge in expectation to a first-order stationary point. Our convergence rate is better than existing results for adaptive gradient methods in terms of dimension. In addition, we prove high probability bounds on the convergence rates of AMSGrad, RMSProp, and AdaGrad, which have not been established before. Our analyses shed light on the mechanism behind adaptive gradient methods in optimizing nonconvex objectives.
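For context, AMSGrad (one of the methods analyzed) augments Adam's exponential moving averages with an elementwise running maximum of the second-moment estimate. The sketch below is a minimal NumPy illustration of that update rule, not the paper's algorithm statement; the hyperparameters, the test function, and the function name `amsgrad` are illustrative choices.

```python
import numpy as np

def amsgrad(grad, x0, alpha=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=500):
    """Minimal AMSGrad sketch: Adam-style moment estimates plus a running
    max on the second moment (illustrative hyperparameter defaults)."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)      # first-moment (momentum) estimate
    v = np.zeros_like(x)      # second-moment estimate
    v_hat = np.zeros_like(x)  # elementwise running max of v
    for _ in range(steps):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        v_hat = np.maximum(v_hat, v)  # the max step that distinguishes AMSGrad from Adam
        x = x - alpha * m / (np.sqrt(v_hat) + eps)
    return x

# Smooth nonconvex test objective f(x) = x^2 + 3 sin^2(x),
# whose gradient is 2x + 3 sin(2x) with unique stationary point x = 0.
f_grad = lambda x: 2 * x + 3 * np.sin(2 * x)
x_star = amsgrad(f_grad, np.array([2.5]))
```

The running max keeps the effective stepsize `alpha / sqrt(v_hat)` non-increasing per coordinate, which is central to the convergence guarantees discussed in the paper.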
Cite
Text
Zhou et al. "On the Convergence of Adaptive Gradient Methods for Nonconvex Optimization." Transactions on Machine Learning Research, 2024.
Markdown
[Zhou et al. "On the Convergence of Adaptive Gradient Methods for Nonconvex Optimization." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/zhou2024tmlr-convergence/)
BibTeX
@article{zhou2024tmlr-convergence,
  title = {{On the Convergence of Adaptive Gradient Methods for Nonconvex Optimization}},
  author = {Zhou, Dongruo and Chen, Jinghui and Cao, Yuan and Yang, Ziyan and Gu, Quanquan},
  journal = {Transactions on Machine Learning Research},
  year = {2024},
  url = {https://mlanthology.org/tmlr/2024/zhou2024tmlr-convergence/}
}