Revisiting the Initial Steps in Adaptive Gradient Descent Optimization

Abstract

Adaptive gradient optimization methods, such as Adam, are prevalent in training deep neural networks across diverse machine learning tasks due to their ability to achieve faster convergence. However, these methods often suffer from suboptimal generalization compared to stochastic gradient descent (SGD) and exhibit instability, particularly when training Transformer models. In this work, we identify the standard zero initialization of the second-order moment estimate ($v_0 = 0$) as a significant factor contributing to these limitations. We introduce simple yet effective remedies: initializing the second-order moment estimate with non-zero values, using either a data-driven or a random initialization strategy. Empirical evaluations demonstrate that our approach not only stabilizes convergence but also improves the final performance of adaptive gradient optimizers.
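
The abstract leaves the mechanics implicit, so a minimal sketch may help. The NumPy code below is an illustrative Adam-style loop in which $v_0$ is seeded from the squared gradient of a warm-up batch rather than zero; the estimator, the hyperparameters, and the handling of bias correction are assumptions made for illustration, not the authors' exact procedure.

```python
import numpy as np

def adam_with_nonzero_v0(params, grad_fn, batches, lr=1e-3,
                         beta1=0.9, beta2=0.999, eps=1e-8, steps=1000):
    """Adam-style update with a non-zero second-moment initialization.

    Instead of v_0 = 0, v_0 is set from the squared gradient on an initial
    warm-up batch (a data-driven guess). This is a sketch of the idea in the
    abstract, not the paper's prescribed estimator.
    """
    m = np.zeros_like(params)                 # first-moment estimate, m_0 = 0
    g0 = grad_fn(params, batches[0])          # gradient on a warm-up batch
    v = g0 ** 2                               # data-driven v_0 > 0 (instead of v_0 = 0)

    for t in range(1, steps + 1):
        g = grad_fn(params, batches[t % len(batches)])
        m = beta1 * m + (1 - beta1) * g       # biased first-moment update
        v = beta2 * v + (1 - beta2) * g ** 2  # biased second-moment update
        m_hat = m / (1 - beta1 ** t)          # standard bias correction for m
        # Note: with v_0 > 0 the usual bias correction for v is arguably
        # unnecessary; it is kept here only for comparability with vanilla Adam.
        v_hat = v / (1 - beta2 ** t)
        params = params - lr * m_hat / (np.sqrt(v_hat) + eps)
    return params
```

A random-initialization variant would replace the warm-up-batch line with, for example, `v = np.random.uniform(low=eps, high=1e-2, size=params.shape)`; the range shown is an arbitrary placeholder, since the abstract does not specify one.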

Cite

Text

Abuduweili and Liu. "Revisiting the Initial Steps in Adaptive Gradient Descent Optimization." NeurIPS 2024 Workshops: OPT, 2024.

Markdown

[Abuduweili and Liu. "Revisiting the Initial Steps in Adaptive Gradient Descent Optimization." NeurIPS 2024 Workshops: OPT, 2024.](https://mlanthology.org/neuripsw/2024/abuduweili2024neuripsw-revisiting/)

BibTeX

@inproceedings{abuduweili2024neuripsw-revisiting,
  title     = {{Revisiting the Initial Steps in Adaptive Gradient Descent Optimization}},
  author    = {Abuduweili, Abulikemu and Liu, Changliu},
  booktitle = {NeurIPS 2024 Workshops: OPT},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/abuduweili2024neuripsw-revisiting/}
}