Efficient Distributed Learning with Sparsity

Abstract

We propose a novel, efficient approach for distributed sparse learning with observations randomly partitioned across machines. In each round of the proposed method, worker machines compute the gradient of the loss on local data and the master machine solves a shifted $\ell_1$ regularized loss minimization problem. After a number of communication rounds that scales only logarithmically with the number of machines and is independent of the other parameters of the problem, the proposed approach provably matches the estimation error bound of centralized methods.
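The abstract describes a round structure in which every worker sends the gradient of its local loss and the master then solves a shifted $\ell_1$ regularized subproblem on its own data. The sketch below is a minimal single-process simulation of that communication pattern, not the paper's exact algorithm or theory: it assumes a squared loss, uses proximal gradient (ISTA) as the master's subproblem solver, and picks the step size, the regularization parameter lam, and the number of rounds purely for illustration.

import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding, the proximal operator of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def local_gradient(X, y, w):
    # Gradient of the squared loss (1/2n) * ||Xw - y||^2 on one machine's data.
    n = X.shape[0]
    return X.T @ (X @ w - y) / n

def solve_shifted_lasso(X1, y1, w, global_grad, lam, iters=500):
    # Approximately solve the master's shifted l1-regularized subproblem
    #   min_v  f_1(v) - <grad f_1(w) - global_grad, v> + lam * ||v||_1
    # by ISTA on the master's local data (X1, y1).
    n = X1.shape[0]
    # Step size from a bound on the local Hessian's spectral norm (assumed choice).
    lr = n / (np.linalg.norm(X1, 2) ** 2)
    shift = local_gradient(X1, y1, w) - global_grad
    v = w.copy()
    for _ in range(iters):
        grad = local_gradient(X1, y1, v) - shift
        v = soft_threshold(v - lr * grad, lr * lam)
    return v

def distributed_sparse_learning(data_parts, lam, rounds=10):
    # Simulated master/worker scheme: each round the workers report local
    # gradients, the master averages them and re-solves its shifted problem.
    d = data_parts[0][0].shape[1]
    w = np.zeros(d)
    X1, y1 = data_parts[0]  # machine 0 acts as the master
    for _ in range(rounds):
        grads = [local_gradient(X, y, w) for X, y in data_parts]
        global_grad = np.mean(grads, axis=0)
        w = solve_shifted_lasso(X1, y1, w, global_grad, lam)
    return w

# Usage: sparse linear regression with data split across 4 machines.
rng = np.random.default_rng(0)
d, s, n_per, m = 50, 5, 200, 4
w_star = np.zeros(d)
w_star[:s] = 1.0
parts = []
for _ in range(m):
    X = rng.standard_normal((n_per, d))
    y = X @ w_star + 0.1 * rng.standard_normal(n_per)
    parts.append((X, y))
w_hat = distributed_sparse_learning(parts, lam=0.05, rounds=8)
print("estimation error:", np.linalg.norm(w_hat - w_star))

In this sketch the shift term, the difference between the master's local gradient and the averaged global gradient at the current iterate, is what turns the master's local problem into a surrogate for the global one; the only quantity communicated per round is a single d-dimensional gradient vector from each machine.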

Cite

Text

Wang et al. "Efficient Distributed Learning with Sparsity." International Conference on Machine Learning, 2017.

Markdown

[Wang et al. "Efficient Distributed Learning with Sparsity." International Conference on Machine Learning, 2017.](https://mlanthology.org/icml/2017/wang2017icml-efficient/)

BibTeX

@inproceedings{wang2017icml-efficient,
  title     = {{Efficient Distributed Learning with Sparsity}},
  author    = {Wang, Jialei and Kolar, Mladen and Srebro, Nathan and Zhang, Tong},
  booktitle = {International Conference on Machine Learning},
  year      = {2017},
  pages     = {3636--3645},
  volume    = {70},
  url       = {https://mlanthology.org/icml/2017/wang2017icml-efficient/}
}