Asynchronous Parallel Greedy Coordinate Descent

Abstract

In this paper, we propose and study an Asynchronous parallel Greedy Coordinate Descent (Asy-GCD) algorithm for minimizing a smooth function with bounded constraints. At each iteration, workers asynchronously conduct greedy coordinate descent updates on a block of variables. In the first part of the paper, we analyze the theoretical behavior of Asy-GCD and prove a linear convergence rate. In the second part, we develop an efficient kernel SVM solver based on Asy-GCD in the shared-memory multi-core setting. Since our algorithm is fully asynchronous (each core does not need to idle and wait for the other cores), the resulting algorithm enjoys good speedup and outperforms existing multi-core kernel SVM solvers, including asynchronous stochastic coordinate descent and multi-core LIBSVM.
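To make the update rule concrete, the following is a minimal Python sketch of asynchronous greedy coordinate descent on a toy box-constrained quadratic of the kind that arises in the kernel SVM dual. The problem data (A, b, C), the block partition, the step rule, and the use of Python threads are illustrative assumptions for this sketch; it is not the authors' multi-core implementation.

import threading
import numpy as np

def asy_gcd(A, b, C, n_workers=4, n_passes=50):
    """Sketch of asynchronous greedy coordinate descent for
    min_{0 <= x <= C} 0.5 * x^T A x - b^T x, with A symmetric and
    positive semidefinite (positive diagonal assumed)."""
    n = b.size
    x = np.zeros(n)  # shared iterate, read and written without locks
    blocks = np.array_split(np.arange(n), n_workers)

    def worker(block):
        for _ in range(n_passes * block.size):
            # Inconsistent (lock-free) read of the shared iterate x.
            grad = A[block] @ x - b[block]
            # Greedy rule: pick the coordinate in this block with the
            # largest projected-gradient magnitude.
            pg = np.clip(x[block] - grad, 0.0, C) - x[block]
            j = int(np.argmax(np.abs(pg)))
            i = block[j]
            # Exact single-coordinate minimization, projected onto [0, C].
            x[i] = np.clip(x[i] - grad[j] / A[i, i], 0.0, C)

    threads = [threading.Thread(target=worker, args=(blk,)) for blk in blocks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x

Plain Python threads are used here only to mimic the lock-free shared-memory pattern described in the abstract; the solver evaluated in the paper is a dedicated multi-core kernel SVM implementation.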

Cite

Text

You et al. "Asynchronous Parallel Greedy Coordinate Descent." Neural Information Processing Systems, 2016.

Markdown

[You et al. "Asynchronous Parallel Greedy Coordinate Descent." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/you2016neurips-asynchronous/)

BibTeX

@inproceedings{you2016neurips-asynchronous,
  title     = {{Asynchronous Parallel Greedy Coordinate Descent}},
  author    = {You, Yang and Lian, Xiangru and Liu, Ji and Yu, Hsiang-Fu and Dhillon, Inderjit S. and Demmel, James and Hsieh, Cho-Jui},
  booktitle = {Neural Information Processing Systems},
  year      = {2016},
  pages     = {4682--4690},
  url       = {https://mlanthology.org/neurips/2016/you2016neurips-asynchronous/}
}