Adding vs. Averaging in Distributed Primal-Dual Optimization

Abstract

Distributed optimization methods for large-scale machine learning suffer from a communication bottleneck. It is difficult to reduce this bottleneck while still efficiently and accurately aggregating partial work from different machines. In this paper, we present a novel generalization of the recent communication-efficient primal-dual framework (COCOA) for distributed optimization. Our framework, COCOA+, allows for additive combination of local updates to the global parameters at each iteration, whereas previous schemes only allow conservative averaging. We give stronger (primal-dual) convergence rate guarantees for both COCOA and our new variants, and generalize the theory for both methods to cover non-smooth convex loss functions. We provide an extensive experimental comparison that shows the markedly improved performance of COCOA+ on several real-world distributed datasets, especially when scaling up the number of machines.
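
The sketch below illustrates the aggregation rule the abstract contrasts: averaging the local updates from each machine (the original COCOA scheme) versus adding them (COCOA+). It is a minimal illustration only, not the authors' implementation; the toy data and the placeholder local solver are assumptions made for the example, and the subproblem scaling that makes additive updates safe in the paper is only noted in a comment.

```python
import numpy as np

# Hedged sketch of the aggregation step in distributed primal-dual methods.
# K machines each compute a local change Delta_w_k from their own data;
# the schemes differ only in how these changes are combined globally.

K = 4                      # number of machines
d, n_per = 10, 50          # feature dimension, examples per machine
rng = np.random.default_rng(0)

# Toy partitioned data for a regularized linear model (assumption for the demo).
parts = [(rng.standard_normal((n_per, d)), rng.choice([-1.0, 1.0], n_per))
         for _ in range(K)]

w = np.zeros(d)            # shared global parameter vector

def local_update(A, y, w, n_steps=5, lam=1.0, step=0.1):
    """Placeholder local solver: a few gradient steps on this machine's
    local regularized least-squares objective. Stands in for the
    approximate local subproblem solver used in the paper."""
    delta = np.zeros_like(w)
    for _ in range(n_steps):
        grad = A.T @ (A @ (w + delta) - y) / len(y) + lam * (w + delta)
        delta -= step * grad
    return delta

deltas = [local_update(A, y, w) for A, y in parts]

# Original COCOA: conservative averaging of the K local updates (gamma = 1/K).
w_avg = w + sum(deltas) / K

# COCOA+: additive combination of the local updates (gamma = 1). In the paper
# this is made safe by scaling the local subproblems more conservatively
# (sigma' >= gamma * K); only the aggregation rule itself is shown here.
w_add = w + sum(deltas)
```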

Cite

Text

Ma et al. "Adding vs. Averaging in Distributed Primal-Dual Optimization." International Conference on Machine Learning, 2015.

Markdown

[Ma et al. "Adding vs. Averaging in Distributed Primal-Dual Optimization." International Conference on Machine Learning, 2015.](https://mlanthology.org/icml/2015/ma2015icml-adding/)

BibTeX

@inproceedings{ma2015icml-adding,
  title     = {{Adding vs. Averaging in Distributed Primal-Dual Optimization}},
  author    = {Ma, Chenxin and Smith, Virginia and Jaggi, Martin and Jordan, Michael and Richtarik, Peter and Takac, Martin},
  booktitle = {International Conference on Machine Learning},
  year      = {2015},
  pages     = {1973--1982},
  volume    = {37},
  url       = {https://mlanthology.org/icml/2015/ma2015icml-adding/}
}