Almost Sure Convergence of Stochastic Gradient Methods Under Gradient Domination

Abstract

Stochastic gradient methods are among the most important algorithms for training machine learning models. While classical assumptions such as strong convexity allow for a simple analysis, they are rarely satisfied in applications. In recent years, global and local gradient domination properties have been shown to be a more realistic replacement for strong convexity. They have been proven to hold in diverse settings such as (simple) policy gradient methods in reinforcement learning and the training of deep neural networks with analytic activation functions. We prove almost sure convergence rates $f(X_n)-f^*\in o\big( n^{-\frac{1}{4\beta-1}+\epsilon}\big)$ of the last iterate for stochastic gradient descent (with and without momentum) under global and local $\beta$-gradient domination assumptions. These almost sure rates come arbitrarily close to recently established rates in expectation. Finally, we demonstrate how to apply our results to the training task in both supervised and reinforcement learning.
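
As a toy illustration of the setting (a minimal sketch, not the paper's experiments or exact assumptions), the snippet below runs last-iterate SGD, with and without heavy-ball momentum, on a quadratic objective that satisfies a Polyak-Łojasiewicz-type gradient domination condition with $f^*=0$. The polynomial step-size schedule $\gamma_n = c/n^{\alpha}$ and all tuning constants are hypothetical choices for illustration only.

```python
import numpy as np

# Illustrative sketch: last-iterate SGD and heavy-ball momentum on the toy
# objective f(x) = ||x||^2 / 2, which satisfies a PL-type gradient domination
# condition with minimum value f* = 0. Gradients are corrupted by additive
# Gaussian noise; step sizes decay polynomially, gamma_n = c / n**alpha.
# All constants below are hypothetical tuning choices, not from the paper.

rng = np.random.default_rng(0)
dim, n_steps = 10, 10_000
alpha, c, momentum = 0.75, 0.5, 0.9

def f(x):
    return 0.5 * np.dot(x, x)

def noisy_grad(x):
    # Exact gradient of f plus zero-mean Gaussian noise.
    return x + 0.1 * rng.standard_normal(dim)

# Plain SGD: X_{n+1} = X_n - gamma_n * g(X_n)
x = np.ones(dim)
for n in range(1, n_steps + 1):
    x -= (c / n**alpha) * noisy_grad(x)
print(f"SGD:      f(X_n) - f* = {f(x):.3e}")

# SGD with heavy-ball momentum on the stochastic gradients.
x, v = np.ones(dim), np.zeros(dim)
for n in range(1, n_steps + 1):
    v = momentum * v + noisy_grad(x)
    x -= (c / n**alpha) * v
print(f"Momentum: f(X_n) - f* = {f(x):.3e}")
```

Tracking $f(X_n)-f^*$ along a single run (rather than averaged over runs) is what distinguishes the almost sure, last-iterate perspective from guarantees in expectation.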

Cite

Text

Weissmann et al. "Almost Sure Convergence of Stochastic Gradient Methods Under Gradient Domination." Transactions on Machine Learning Research, 2025.

Markdown

[Weissmann et al. "Almost Sure Convergence of Stochastic Gradient Methods Under Gradient Domination." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/weissmann2025tmlr-almost/)

BibTeX

@article{weissmann2025tmlr-almost,
  title     = {{Almost Sure Convergence of Stochastic Gradient Methods Under Gradient Domination}},
  author    = {Weissmann, Simon and Klein, Sara and Azizian, Waïss and Döring, Leif},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/weissmann2025tmlr-almost/}
}