Stochastic Proximal Methods for Non-Smooth Non-Convex Constrained Sparse Optimization
Abstract
This paper focuses on stochastic proximal gradient methods for optimizing a smooth non-convex loss function with a non-smooth non-convex regularizer and convex constraints. To the best of our knowledge, we present the first non-asymptotic convergence bounds for this class of problems. We propose two simple stochastic proximal gradient algorithms, for general stochastic optimization and for finite-sum problems. In a numerical experiment we compare our algorithms with the current state-of-the-art deterministic algorithm and find that our algorithms exhibit superior convergence.
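The problem class described is, roughly, minimizing E[f(x, xi)] + r(x) over x in C, with f smooth non-convex, r non-smooth non-convex, and C convex. As an illustration only (not the paper's algorithms), below is a minimal Python sketch of a generic stochastic proximal gradient iteration; the l1 regularizer, the Euclidean-ball constraint, the prox-then-project update, and the toy least-squares loss are all simplifying assumptions made here for concreteness.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def project_ball(v, radius):
    # Euclidean projection onto the convex set {x : ||x||_2 <= radius}.
    norm = np.linalg.norm(v)
    return v if norm <= radius else v * (radius / norm)

def stochastic_prox_gradient(stoch_grad, x0, step, lam, radius, n_iters, rng):
    # Generic loop: x <- P_C( prox_{step*lam*||.||_1}( x - step * g ) ),
    # where g is an unbiased stochastic gradient of the smooth loss.
    # Applying the prox and the projection sequentially is a simplification;
    # it is not in general the exact prox of the regularized, constrained sum.
    x = np.array(x0, dtype=float)
    for _ in range(n_iters):
        g = stoch_grad(x, rng)
        x = soft_threshold(x - step * g, step * lam)
        x = project_ball(x, radius)
    return x

# Toy usage: mini-batch stochastic gradient of a least-squares loss
# (convex smooth loss used only to keep the example short).
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 10))
b = A @ rng.normal(size=10) + 0.1 * rng.normal(size=100)

def sg(x, rng):
    i = rng.integers(0, A.shape[0], size=8)  # sample a mini-batch of rows
    return A[i].T @ (A[i] @ x - b[i]) / len(i)

x_hat = stochastic_prox_gradient(sg, np.zeros(10), step=0.01, lam=0.1,
                                 radius=5.0, n_iters=200,
                                 rng=np.random.default_rng(1))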
Cite
Text
Metel and Takeda. "Stochastic Proximal Methods for Non-Smooth Non-Convex Constrained Sparse Optimization." Journal of Machine Learning Research, 2021.
Markdown
[Metel and Takeda. "Stochastic Proximal Methods for Non-Smooth Non-Convex Constrained Sparse Optimization." Journal of Machine Learning Research, 2021.](https://mlanthology.org/jmlr/2021/metel2021jmlr-stochastic/)
BibTeX
@article{metel2021jmlr-stochastic,
title = {{Stochastic Proximal Methods for Non-Smooth Non-Convex Constrained Sparse Optimization}},
author = {Metel, Michael R. and Takeda, Akiko},
journal = {Journal of Machine Learning Research},
year = {2021},
pages = {1--36},
volume = {22},
url = {https://mlanthology.org/jmlr/2021/metel2021jmlr-stochastic/}
}