Communication-Efficient Distributed Dual Coordinate Ascent

Abstract

Communication remains the most significant bottleneck in the performance of distributed optimization algorithms for large-scale machine learning. In this paper, we propose a communication-efficient framework, COCOA, that uses local computation in a primal-dual setting to dramatically reduce the amount of necessary communication. We provide a strong convergence rate analysis for this class of algorithms, as well as experiments on real-world distributed datasets with implementations in Spark. In our experiments, we find that as compared to state-of-the-art mini-batch versions of SGD and SDCA algorithms, COCOA converges to the same .001-accurate solution quality on average 25× as quickly.
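
To make the local-computation idea concrete, the following is a minimal single-process sketch of the outer/inner structure the abstract describes, assuming a hinge-loss SVM dual solved with local stochastic dual coordinate ascent (SDCA) and updates combined by averaging. The toy data, the partitioning, and all parameter values (K, local_iters, lam) are illustrative assumptions, not the paper's Spark implementation. Each simulated machine repeatedly updates its own dual coordinates against a local copy of the primal vector, and only the aggregated updates are communicated once per outer round.

import numpy as np

rng = np.random.default_rng(0)

# Toy problem: n examples, d features, labels in {-1, +1} (illustrative only).
n, d, K = 1000, 20, 4
X = rng.standard_normal((n, d))
y = np.where(X @ rng.standard_normal(d) >= 0, 1.0, -1.0)

lam = 0.01                                  # L2 regularization strength
local_iters = 50                            # H: local SDCA steps per outer round
parts = np.array_split(np.arange(n), K)     # static partition across "machines"

alpha = np.zeros(n)                         # dual variables, one per example
w = np.zeros(d)                             # primal vector, w = (1/(lam*n)) * sum_i alpha_i * y_i * x_i

def local_sdca(part, w_global):
    # Run H randomized dual coordinate ascent steps on one partition; return only
    # the local changes (delta_alpha for this partition, delta_w), which is the
    # information a real implementation would communicate.
    delta_alpha = np.zeros(len(part))
    delta_w = np.zeros(d)
    for _ in range(local_iters):
        j = rng.integers(len(part))
        i = part[j]
        x_i, y_i = X[i], y[i]
        # Closed-form single-coordinate maximization of the SVM dual objective,
        # evaluated at the locally updated iterate w_global + delta_w.
        residual = 1.0 - y_i * (x_i @ (w_global + delta_w))
        a_old = alpha[i] + delta_alpha[j]
        a_new = np.clip(a_old + lam * n * residual / (x_i @ x_i), 0.0, 1.0)
        delta_alpha[j] += a_new - a_old
        delta_w += (a_new - a_old) * y_i * x_i / (lam * n)
    return delta_alpha, delta_w

for t in range(20):                         # outer rounds = communication rounds
    updates = [local_sdca(part, w) for part in parts]   # would run in parallel
    for (d_alpha, d_w), part in zip(updates, parts):
        alpha[part] += d_alpha / K          # averaging: combine updates with weight 1/K
        w += d_w / K
    primal = lam / 2 * (w @ w) + np.mean(np.maximum(0.0, 1.0 - y * (X @ w)))
    print(f"round {t:2d}  primal objective {primal:.4f}")

Scaling the aggregated updates by 1/K keeps the combined dual iterate feasible regardless of how much local work each machine does, which is what allows communication to happen only once per outer round rather than once per coordinate update.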

Cite

Text

Jaggi et al. "Communication-Efficient Distributed Dual Coordinate Ascent." Neural Information Processing Systems, 2014.

Markdown

[Jaggi et al. "Communication-Efficient Distributed Dual Coordinate Ascent." Neural Information Processing Systems, 2014.](https://mlanthology.org/neurips/2014/jaggi2014neurips-communicationefficient/)

BibTeX

@inproceedings{jaggi2014neurips-communicationefficient,
  title     = {{Communication-Efficient Distributed Dual Coordinate Ascent}},
  author    = {Jaggi, Martin and Smith, Virginia and Tak{\'a}{\v{c}}, Martin and Terhorst, Jonathan and Krishnan, Sanjay and Hofmann, Thomas and Jordan, Michael I.},
  booktitle = {Neural Information Processing Systems},
  year      = {2014},
  pages     = {3068--3076},
  url       = {https://mlanthology.org/neurips/2014/jaggi2014neurips-communicationefficient/}
}