Toward a Unified Theory of Gradient Descent Under Generalized Smoothness

Abstract

We study the classical optimization problem $\min_{x \in \mathbb{R}^d} f(x)$ and analyze the gradient descent (GD) method in both nonconvex and convex settings. It is well known that, under the $L$-smoothness assumption ($\|\nabla^2 f(x)\| \leq L$), the minimizer of the quadratic upper bound $f(x_k) + \langle \nabla f(x_k), x_{k+1} - x_k \rangle + \frac{L}{2} \|x_{k+1} - x_k\|^2$ is $x_{k+1} = x_k - \gamma_k \nabla f(x_k)$ with step size $\gamma_k = \frac{1}{L}$. Surprisingly, a similar result can be derived under the $\ell$-generalized smoothness assumption ($\|\nabla^2 f(x)\| \leq \ell(\|\nabla f(x)\|)$). In this case, we derive the step size $\gamma_k = \int_{0}^{1} \frac{dv}{\ell(\|\nabla f(x_k)\| + \|\nabla f(x_k)\| v)}.$ Using this step size rule, we improve upon existing theoretical convergence rates and obtain new results in several previously unexplored setups.
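A minimal Python sketch of the step size rule described in the abstract (not the authors' code): it approximates $\gamma_k = \int_{0}^{1} \frac{dv}{\ell(\|\nabla f(x_k)\| + \|\nabla f(x_k)\| v)}$ with a simple uniform-grid quadrature and plugs it into plain GD. The choice $\ell(s) = L_0 + L_1 s$ and the quadratic objective in the example are hypothetical, used only for illustration.

```python
import numpy as np

def generalized_step_size(grad_norm, ell, num_points=1001):
    # Approximate gamma_k = \int_0^1 dv / ell(||g|| + ||g|| * v) on a uniform
    # grid; since the interval [0, 1] has length 1, the mean of the integrand
    # is a simple quadrature approximation of the integral.
    v = np.linspace(0.0, 1.0, num_points)
    integrand = 1.0 / ell(grad_norm + grad_norm * v)
    return float(np.mean(integrand))

def gd_generalized(grad_f, x0, ell, num_iters=100):
    # Plain GD: x_{k+1} = x_k - gamma_k * grad_f(x_k), recomputing the
    # generalized-smoothness step size at every iteration.
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iters):
        g = grad_f(x)
        gamma = generalized_step_size(np.linalg.norm(g), ell)
        x = x - gamma * g
    return x

# Hypothetical example: (L0, L1)-smoothness with ell(s) = L0 + L1 * s,
# applied to f(x) = ||x||^2, whose gradient is 2x.
L0, L1 = 1.0, 0.1
ell = lambda s: L0 + L1 * s
x_final = gd_generalized(lambda x: 2.0 * x, x0=np.ones(5), ell=ell)
```

Note that with $\ell(s) = L_0 + L_1 s$ the integral also has a closed form, so the quadrature here is only a convenience for arbitrary $\ell$.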

Cite

Text

Tyurin. "Toward a Unified Theory of Gradient Descent Under Generalized Smoothness." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Tyurin. "Toward a Unified Theory of Gradient Descent Under Generalized Smoothness." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/tyurin2025icml-unified/)

BibTeX

@inproceedings{tyurin2025icml-unified,
  title     = {{Toward a Unified Theory of Gradient Descent Under Generalized Smoothness}},
  author    = {Tyurin, Alexander},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {60493--60514},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/tyurin2025icml-unified/}
}