An Asynchronous Parallel Stochastic Coordinate Descent Algorithm
Abstract
We describe an asynchronous parallel stochastic coordinate descent algorithm for minimizing smooth unconstrained or separably constrained functions. The method achieves a linear convergence rate on functions that satisfy an essential strong convexity property and a sublinear rate ($1/K$) on general convex functions. Near-linear speedup on a multicore system can be expected if the number of processors is $O(n^{1/2})$ in unconstrained optimization and $O(n^{1/4})$ in the separable-constrained case, where $n$ is the number of variables. We describe results from implementation on 40-core processors.
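The sketch below is a minimal illustration of the lock-free update pattern the abstract describes, not the authors' implementation: several threads share the iterate x, and each thread repeatedly samples a coordinate i, evaluates the partial gradient from a possibly stale view of x, and writes the update back without locking. The synthetic quadratic objective f(x) = (1/2)x'Qx - c'x, the exact 1/L_i coordinate step (the paper's analysis uses a shorter step proportional to 1/L_max to absorb staleness), and all names and problem sizes are assumptions made for illustration.

// Minimal sketch of asynchronous stochastic coordinate descent
// (illustrative only; not the code evaluated in the paper).
#include <cstdio>
#include <random>
#include <thread>
#include <vector>

int main() {
    const int n = 200;                    // number of variables
    const int m = 400;                    // rows of the synthetic data matrix
    const int num_threads = 4;
    const long updates_per_thread = 50000;

    // Synthetic strongly convex quadratic f(x) = 0.5*x'Qx - c'x with Q = A'A.
    std::mt19937 gen(0);
    std::normal_distribution<double> nd(0.0, 1.0);
    std::vector<double> A(static_cast<size_t>(m) * n), c(n);
    for (double& v : A) v = nd(gen);
    for (double& v : c) v = nd(gen);
    std::vector<double> Q(static_cast<size_t>(n) * n, 0.0);
    for (int r = 0; r < m; ++r)
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                Q[i * n + j] += A[r * n + i] * A[r * n + j];

    // Shared iterate. The threads below read and write it without locks;
    // those races are exactly the asynchrony being illustrated.
    std::vector<double> x(n, 0.0);

    auto worker = [&](unsigned seed) {
        std::mt19937 g(seed);
        std::uniform_int_distribution<int> pick(0, n - 1);
        for (long t = 0; t < updates_per_thread; ++t) {
            int i = pick(g);
            // Partial gradient grad_i f(x) = (Qx)_i - c_i, computed from a
            // view of x that other threads may be mutating concurrently.
            double gi = -c[i];
            for (int j = 0; j < n; ++j) gi += Q[i * n + j] * x[j];
            // Exact minimization along coordinate i (step 1/Q_ii = 1/L_i).
            x[i] -= gi / Q[i * n + i];
        }
    };

    std::vector<std::thread> pool;
    for (int t = 0; t < num_threads; ++t)
        pool.emplace_back(worker, static_cast<unsigned>(t + 1));
    for (std::thread& th : pool) th.join();

    // Objective value at the final iterate.
    double fx = 0.0;
    for (int i = 0; i < n; ++i) {
        double qx = 0.0;
        for (int j = 0; j < n; ++j) qx += Q[i * n + j] * x[j];
        fx += 0.5 * x[i] * qx - c[i] * x[i];
    }
    std::printf("f(x) after asynchronous run: %.6f\n", fx);
    return 0;
}

Compile with, e.g., g++ -O2 -pthread. The unsynchronized accesses to x are deliberate here (formally a data race in C++); they mirror the model in which coordinate updates are atomic but each thread may compute its step from information that is several updates old, which is the staleness the paper's convergence analysis bounds.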
BibTeX
@article{liu2015jmlr-asynchronous,
title = {{An Asynchronous Parallel Stochastic Coordinate Descent Algorithm}},
author = {Liu, Ji and Wright, Stephen J. and Ré, Christopher and Bittorf, Victor and Sridhar, Srikrishna},
journal = {Journal of Machine Learning Research},
year = {2015},
pages = {285--322},
volume = {16},
url = {https://mlanthology.org/jmlr/2015/liu2015jmlr-asynchronous/}
}