DoWG Unleashed: An Efficient Universal Parameter-Free Gradient Descent Method

Abstract

This paper proposes a new easy-to-implement parameter-free gradient-based optimizer: DoWG (Distance over Weighted Gradients). We prove that DoWG is efficient---matching the convergence rate of optimally tuned gradient descent in convex optimization up to a logarithmic factor without tuning any parameters, and universal---automatically adapting to both smooth and nonsmooth problems. While popular algorithms following the AdaGrad framework compute a running average of the squared gradients, DoWG maintains a new distance-based weighted version of the running average, which is crucial to achieve the desired properties. To complement our theory, we also show empirically that DoWG trains at the edge of stability, and validate its effectiveness on practical machine learning tasks.
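For concreteness, the sketch below illustrates the kind of distance-weighted update the abstract describes: instead of AdaGrad's plain running sum of squared gradient norms, each squared gradient norm is weighted by the (squared) running maximum distance from the initial point, and that same distance also scales the step size. This is a minimal illustrative sketch, not the paper's exact pseudocode; the function name dowg, the initial distance estimate r_eps, and the loop structure are assumptions made here for exposition.

import numpy as np

def dowg(grad, x0, steps=1000, r_eps=1e-4):
    """Sketch of a DoWG-style parameter-free update (illustrative, assumed details).

    grad  : callable returning the gradient at a point
    x0    : starting point (numpy array)
    r_eps : small initial distance estimate (assumed default, not from the paper)
    """
    x = x0.copy()
    r_bar = r_eps   # running maximum distance from the initial point x0
    v = 0.0         # distance-weighted running sum of squared gradient norms
    for _ in range(steps):
        g = grad(x)
        r_bar = max(r_bar, np.linalg.norm(x - x0))
        v += r_bar**2 * np.dot(g, g)
        if v == 0.0:
            break   # zero gradient from the start: already stationary
        eta = r_bar**2 / np.sqrt(v)   # step size set without any tuned parameters
        x = x - eta * g
    return x

# Example use: minimize the quadratic f(x) = ||x||^2 / 2, whose gradient is x.
x_min = dowg(lambda x: x, np.ones(10))

The key design point the abstract highlights is that both the step size numerator and the gradient weights depend on the distance travelled so far, which is what allows the method to adapt to smooth and nonsmooth problems without a tuned learning rate.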

Cite

Text

Khaled et al. "DoWG Unleashed: An Efficient Universal Parameter-Free Gradient Descent Method." Neural Information Processing Systems, 2023.

Markdown

[Khaled et al. "DoWG Unleashed: An Efficient Universal Parameter-Free Gradient Descent Method." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/khaled2023neurips-dowg/)

BibTeX

@inproceedings{khaled2023neurips-dowg,
  title     = {{DoWG Unleashed: An Efficient Universal Parameter-Free Gradient Descent Method}},
  author    = {Khaled, Ahmed and Mishchenko, Konstantin and Jin, Chi},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/khaled2023neurips-dowg/}
}