Random Scaling and Momentum for Non-Smooth Non-Convex Optimization

Abstract

Training neural networks requires optimizing a loss function that may be highly irregular, and in particular neither convex nor smooth. Popular training algorithms are based on stochastic gradient descent with momentum (SGDM), for which classical analysis applies only if the loss is either convex or smooth. We show that a very small modification to SGDM closes this gap: simply scale the update at each time point by an exponentially distributed random scalar. The resulting algorithm achieves optimal convergence guarantees. Intriguingly, this result is not derived by a specific analysis of SGDM: instead, it falls naturally out of a more general framework for converting online convex optimization algorithms to non-convex optimization algorithms.
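To make the modification concrete, here is a minimal sketch of an SGDM loop where each step is multiplied by an exponentially distributed random scalar with mean 1. This is an illustration of the idea described in the abstract, not the paper's pseudocode; the function names, hyperparameters, and the exact form of the momentum update are illustrative assumptions.

```python
import numpy as np

def sgdm_with_random_scaling(grad_fn, x0, lr=0.01, beta=0.9, steps=1000, seed=0):
    """Sketch: SGDM where each update is scaled by an Exp(1) random scalar.

    grad_fn(x) returns a stochastic (sub)gradient at x. The scaling factor
    s_t ~ Exp(1) has mean 1, so the update matches plain SGDM in expectation.
    Illustrative only; not the authors' exact algorithm statement.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    m = np.zeros_like(x)                    # momentum buffer
    for _ in range(steps):
        g = grad_fn(x)
        m = beta * m + (1 - beta) * g       # exponential moving average of gradients
        s = rng.exponential(1.0)            # random scalar ~ Exp(1), mean 1
        x = x - lr * s * m                  # scale this step's update by s
    return x

# Usage on a simple non-smooth objective f(x) = |x - 3| with noisy subgradients.
grad = lambda x: np.sign(x - 3.0) + 0.1 * np.random.randn(*np.shape(x))
x_final = sgdm_with_random_scaling(grad, x0=[0.0], lr=0.05, steps=2000)
print(x_final)  # approximately 3
```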

Cite

Text

Zhang and Cutkosky. "Random Scaling and Momentum for Non-Smooth Non-Convex Optimization." International Conference on Machine Learning, 2024.

Markdown

[Zhang and Cutkosky. "Random Scaling and Momentum for Non-Smooth Non-Convex Optimization." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/zhang2024icml-random/)

BibTeX

@inproceedings{zhang2024icml-random,
  title     = {{Random Scaling and Momentum for Non-Smooth Non-Convex Optimization}},
  author    = {Zhang, Qinzi and Cutkosky, Ashok},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {58780--58799},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/zhang2024icml-random/}
}