Simple Stochastic Gradient Methods for Non-Smooth Non-Convex Regularized Optimization

Abstract

Our work focuses on stochastic gradient methods for optimizing a smooth non-convex loss function with a non-smooth non-convex regularizer. Research on this class of problems is quite limited, and until recently no non-asymptotic convergence results had been reported. We present two simple stochastic gradient algorithms, for finite-sum and general stochastic optimization problems, which have superior convergence complexities compared to the current state of the art. We also compare our algorithms' practical performance on empirical risk minimization problems.
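
The problem class described in the abstract, a smooth non-convex loss plus a non-smooth non-convex regularizer, can be illustrated with a generic proximal stochastic gradient iteration. The sketch below is not the paper's algorithm: the sigmoid loss, the MCP regularizer with its closed-form proximal operator, and all hyperparameters are illustrative assumptions chosen only to show the structure of such an iteration.

```python
# Minimal sketch of a proximal stochastic gradient iteration for a regularized
# finite-sum problem min_w (1/n) sum_i f_i(w) + r(w), where each f_i is smooth
# but possibly non-convex and r is non-smooth and non-convex.
# NOTE: this is NOT the paper's method; the loss, regularizer, and step sizes
# below are illustrative assumptions.
import numpy as np

def mcp_prox(v, step, lam=0.1, gamma=3.0):
    """Closed-form proximal operator of step * MCP penalty (requires gamma > step)."""
    out = np.copy(v)
    a = np.abs(v)
    small = a <= step * lam
    mid = (a > step * lam) & (a <= gamma * lam)
    out[small] = 0.0
    out[mid] = np.sign(v[mid]) * (a[mid] - step * lam) / (1.0 - step / gamma)
    # entries with |v| > gamma * lam are left unchanged
    return out

def sigmoid_loss_grad(w, X, y):
    """Gradient of the smooth non-convex sigmoid loss (1/m) sum_i 1/(1 + exp(y_i x_i^T w))."""
    m = X.shape[0]
    z = y * (X @ w)
    s = 1.0 / (1.0 + np.exp(z))      # per-sample loss values
    coeff = -s * (1.0 - s) * y       # chain rule through z = y * x^T w
    return X.T @ coeff / m

def prox_sgd(X, y, n_iters=2000, batch=16, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        idx = rng.choice(n, size=batch, replace=False)  # sample a mini-batch
        g = sigmoid_loss_grad(w, X[idx], y[idx])        # stochastic gradient of the smooth loss
        w = mcp_prox(w - step * g, step)                # proximal step on the non-convex regularizer
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 20))
    w_true = np.zeros(20)
    w_true[:3] = [2.0, -1.5, 1.0]
    y = np.sign(X @ w_true + 0.1 * rng.standard_normal(500))
    w_hat = prox_sgd(X, y)
    print("nonzero coordinates:", np.flatnonzero(np.round(w_hat, 3)))
```

The MCP proximal step keeps the sparsity-inducing behavior of soft-thresholding near zero while leaving large coordinates untouched, which is the usual motivation for non-convex regularizers in empirical risk minimization.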

Cite

Text

Metel and Takeda. "Simple Stochastic Gradient Methods for Non-Smooth Non-Convex Regularized Optimization." International Conference on Machine Learning, 2019.

Markdown

[Metel and Takeda. "Simple Stochastic Gradient Methods for Non-Smooth Non-Convex Regularized Optimization." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/metel2019icml-simple/)

BibTeX

@inproceedings{metel2019icml-simple,
  title     = {{Simple Stochastic Gradient Methods for Non-Smooth Non-Convex Regularized Optimization}},
  author    = {Metel, Michael and Takeda, Akiko},
  booktitle = {International Conference on Machine Learning},
  year      = {2019},
  pages     = {4537--4545},
  volume    = {97},
  url       = {https://mlanthology.org/icml/2019/metel2019icml-simple/}
}