High-Probability Bounds for Non-Convex Stochastic Optimization with Heavy Tails

Abstract

We consider non-convex stochastic optimization using first-order algorithms for which the gradient estimates may have heavy tails. We show that a combination of gradient clipping, momentum, and normalized gradient descent yields convergence to critical points with high probability at the best-known rates for smooth losses when the gradients only have bounded $\mathfrak{p}$th moments for some $\mathfrak{p}\in(1,2]$. We then consider the case of second-order smooth losses, which to our knowledge have not been studied in this setting, and again obtain high-probability bounds for any $\mathfrak{p}$. Moreover, our results hold for arbitrary smooth norms, in contrast to the typical SGD analysis, which requires a Hilbert space norm. Further, we show that after a suitable "burn-in" period, the objective value decreases monotonically at every iteration until a critical point is identified, which provides intuition behind the popular practice of learning rate "warm-up" and also yields a last-iterate guarantee.
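
The abstract describes an update built from gradient clipping, momentum, and normalized gradient descent. The following is a minimal Python sketch of one way these pieces can be combined; the clipping threshold `clip`, momentum parameter `beta`, step size `lr`, and the use of the Euclidean norm are illustrative assumptions, not the paper's prescribed parameter schedules or its general smooth-norm setting.

import numpy as np

def clipped_momentum_normalized_sgd(grad_oracle, x0, steps=1000,
                                    lr=0.01, beta=0.9, clip=1.0):
    """Sketch: normalized descent with momentum applied to clipped
    stochastic gradients (Euclidean norm assumed for illustration)."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)
    for _ in range(steps):
        g = grad_oracle(x)                  # possibly heavy-tailed gradient estimate
        g_norm = np.linalg.norm(g)
        if g_norm > clip:                   # clipping tames heavy-tailed noise
            g = g * (clip / g_norm)
        m = beta * m + (1.0 - beta) * g     # momentum averages the clipped estimates
        x = x - lr * m / max(np.linalg.norm(m), 1e-12)  # unit-length (normalized) step
    return x

# Hypothetical usage: noisy gradients of f(x) = ||x||^2 with Student-t noise,
# whose p-th moments are finite only for p < 1.5 (matching the p in (1,2] regime).
rng = np.random.default_rng(0)
oracle = lambda x: 2 * x + rng.standard_t(df=1.5, size=x.shape)
x_final = clipped_momentum_normalized_sgd(oracle, x0=np.ones(10), steps=2000)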

Cite

Text

Cutkosky and Mehta. "High-Probability Bounds for Non-Convex Stochastic Optimization with Heavy Tails." Neural Information Processing Systems, 2021.

Markdown

[Cutkosky and Mehta. "High-Probability Bounds for Non-Convex Stochastic Optimization with Heavy Tails." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/cutkosky2021neurips-highprobability/)

BibTeX

@inproceedings{cutkosky2021neurips-highprobability,
  title     = {{High-Probability Bounds for Non-Convex Stochastic Optimization with Heavy Tails}},
  author    = {Cutkosky, Ashok and Mehta, Harsh},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/cutkosky2021neurips-highprobability/}
}