One-Round Communication Efficient Distributed M-Estimation

Abstract

Communication cost and local computation complexity are two main bottlenecks in distributed statistical learning. In this paper, we consider the distributed M-estimation problem in both the regular and the sparse case and propose a novel one-round communication-efficient algorithm. For the regular distributed M-estimator, we establish asymptotic normality, which enables statistical inference. For the sparse distributed M-estimator, only a quadratic Lasso problem needs to be solved on the master machine, using the same local information as in the regular case. Consequently, the computational complexity on each local machine is significantly reduced compared with existing debiased sparse estimators. Under mild conditions, our theoretical results guarantee that the proposed distributed estimators achieve a (near-)optimal statistical convergence rate. The effectiveness of the proposed algorithm is verified through experiments on different M-estimation problems using both synthetic and real benchmark datasets.
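
To make the one-round protocol in the abstract concrete, below is a minimal sketch in which each worker ships its local Hessian and local estimator to the master in a single communication round; the master forms a Hessian-weighted average for the regular case and, for the sparse case, solves an l1-penalized quadratic program (a "quadratic Lasso") built from the same statistics. This is an illustration under squared loss with an aggregation rule of our choosing, not necessarily the authors' exact construction; the synthetic data layout, the Cholesky reduction to a standard Lasso, and the scikit-learn solver are all assumptions made for the sake of a runnable example.

import numpy as np
from sklearn.linear_model import Lasso  # used only to solve the quadratic Lasso step

rng = np.random.default_rng(0)

# --- synthetic data split across m workers (hypothetical setup) ---
m, n_local, p = 10, 200, 20
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]                  # sparse ground truth
workers = []
for _ in range(m):
    X = rng.standard_normal((n_local, p))
    y = X @ beta_true + 0.5 * rng.standard_normal(n_local)
    workers.append((X, y))

# --- local phase: each worker computes its own estimator and Hessian ---
# For squared loss the local Hessian is X'X/n and the local M-estimator is
# ordinary least squares; the single communication round ships (H_k, beta_k).
local_stats = []
for X, y in workers:
    H_k = X.T @ X / n_local
    beta_k = np.linalg.solve(H_k, X.T @ y / n_local)
    local_stats.append((H_k, beta_k))

# --- master phase, regular case: Hessian-weighted one-round aggregation ---
H_bar = sum(H for H, _ in local_stats) / m
b_bar = sum(H @ b for H, b in local_stats) / m
beta_regular = np.linalg.solve(H_bar, b_bar)

# --- master phase, sparse case: quadratic Lasso on the same statistics ---
# Minimize 0.5 * beta' H_bar beta - b_bar' beta + lam * ||beta||_1.
# With the Cholesky factorization H_bar = L L', this is a standard Lasso
# with design L' and response L^{-1} b_bar.
L = np.linalg.cholesky(H_bar)
X_tilde = L.T
y_tilde = np.linalg.solve(L, b_bar)
lam = 0.05
# sklearn's Lasso minimizes (1/(2*n)) ||y - X b||^2 + alpha ||b||_1,
# so alpha = lam / p matches the objective above up to a constant factor.
lasso = Lasso(alpha=lam / p, fit_intercept=False)
beta_sparse = lasso.fit(X_tilde, y_tilde).coef_

print("regular estimate (first 5):", np.round(beta_regular[:5], 2))
print("sparse estimate  (first 5):", np.round(beta_sparse[:5], 2))

Note that the sparse step reuses exactly the statistics (H_k, beta_k) already communicated for the regular case, which reflects the abstract's point that the sparse estimator requires no additional local computation or communication beyond the regular one.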

Cite

Text

Bao and Xiong. "One-Round Communication Efficient Distributed M-Estimation." Artificial Intelligence and Statistics, 2021.

Markdown

[Bao and Xiong. "One-Round Communication Efficient Distributed M-Estimation." Artificial Intelligence and Statistics, 2021.](https://mlanthology.org/aistats/2021/bao2021aistats-oneround/)

BibTeX

@inproceedings{bao2021aistats-oneround,
  title     = {{One-Round Communication Efficient Distributed M-Estimation}},
  author    = {Bao, Yajie and Xiong, Weijia},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2021},
  pages     = {46--54},
  volume    = {130},
  url       = {https://mlanthology.org/aistats/2021/bao2021aistats-oneround/}
}