An Asynchronous Parallel Stochastic Coordinate Descent Algorithm
Abstract
We describe an asynchronous parallel stochastic coordinate descent algorithm for minimizing smooth unconstrained or separably constrained functions. The method achieves a linear convergence rate on functions that satisfy an essential strong convexity property and a sublinear rate of O(1/K) (where K is the iteration count) on general convex functions. Near-linear speedup on a multicore system can be expected if the number of processors is O(n^{1/2}) in unconstrained optimization and O(n^{1/4}) in the separable-constrained case, where n is the number of variables. We present results from an implementation on a 40-core processor.
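The abstract gives no pseudocode, but the update scheme it describes can be sketched briefly: each worker repeatedly samples a coordinate, computes the corresponding partial gradient from the shared iterate (which may be stale, since no locks are taken), and writes back a single coordinate. The sketch below is a minimal illustration on a hypothetical strongly convex quadratic; the problem setup (A, b), the step sizes 1/L_i, and the `worker` helper are illustrative assumptions, not the paper's algorithmic details or experimental configuration, and CPython's GIL means the threads demonstrate the lock-free update pattern rather than real speedup.

```python
import threading
import numpy as np

# Sketch of asynchronous stochastic coordinate descent on a hypothetical
# strongly convex quadratic f(x) = 0.5 * x^T A x - b^T x.
# All names here (A, b, L, worker, ...) are illustrative assumptions.
n = 1000
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n)) / np.sqrt(n)
A = M.T @ M + np.eye(n)      # positive definite, hence strongly convex
b = rng.standard_normal(n)
L = np.diag(A).copy()        # coordinate-wise Lipschitz constants of grad f

x = np.zeros(n)              # shared iterate, read and written without locks

def worker(num_updates: int, seed: int) -> None:
    local = np.random.default_rng(seed)   # per-thread RNG (thread safety)
    for _ in range(num_updates):
        i = int(local.integers(n))        # sample a coordinate uniformly
        g = A[i] @ x - b[i]               # partial gradient; x may be stale
        x[i] -= g / L[i]                  # lock-free single-coordinate write

threads = [threading.Thread(target=worker, args=(20000, s)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

x_star = np.linalg.solve(A, b)
print("||x - x*|| =", np.linalg.norm(x - x_star))
```

With a single thread this reduces to standard stochastic coordinate descent; adding threads introduces the stale reads whose effect the paper's analysis bounds.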
Cite

Text:
Liu et al. "An Asynchronous Parallel Stochastic Coordinate Descent Algorithm." International Conference on Machine Learning, 2014.

Markdown:
[Liu et al. "An Asynchronous Parallel Stochastic Coordinate Descent Algorithm." International Conference on Machine Learning, 2014.](https://mlanthology.org/icml/2014/liu2014icml-asynchronous/)

BibTeX:
@inproceedings{liu2014icml-asynchronous,
  title     = {{An Asynchronous Parallel Stochastic Coordinate Descent Algorithm}},
  author    = {Liu, Ji and Wright, Steve and Re, Christopher and Bittorf, Victor and Sridhar, Srikrishna},
  booktitle = {International Conference on Machine Learning},
  year      = {2014},
  pages     = {469--477},
  volume    = {32},
  url       = {https://mlanthology.org/icml/2014/liu2014icml-asynchronous/}
}