One-Shot Federated Learning: Theoretical Limits and Algorithms to Achieve Them

Abstract

We consider distributed statistical optimization in a one-shot setting, where there are $m$ machines, each observing $n$ i.i.d. samples. Based on its observed samples, each machine sends a $B$-bit-long message to a server. The server then collects the messages from all machines and estimates a parameter that minimizes an expected convex loss function. We investigate the impact of the communication budget $B$ on the expected error and derive a tight lower bound on the error achievable by any algorithm. We then propose an estimator, which we call the Multi-Resolution Estimator (MRE), whose expected error (when $B \ge d \log mn$, where $d$ is the dimension of the parameter) meets the aforementioned lower bound up to a poly-logarithmic factor in $mn$. The expected error of MRE, unlike that of existing algorithms, tends to zero as the number of machines ($m$) goes to infinity, even when the number of samples per machine ($n$) remains bounded by a constant. We also address the problem of learning under a tiny communication budget, and present lower and upper error bounds for the case where the budget $B$ is a constant.
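To make the one-shot, $B$-bit communication model concrete, below is a minimal simulation sketch in Python. It uses a simple baseline in which each machine forms a local estimate from its $n$ samples, quantizes it to $B$ bits, and the server averages the received messages. This is only an illustration of the setting; it is not the MRE estimator proposed in the paper, and naive averaging of quantized local estimates does not in general achieve the paper's lower bound. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def quantize(x, B, lo=-2.0, hi=2.0):
    # Uniform scalar quantizer: map values in [lo, hi] to one of 2**B levels,
    # i.e., each machine's message fits in B bits.
    levels = 2 ** B
    idx = np.clip(np.round((x - lo) / (hi - lo) * (levels - 1)), 0, levels - 1)
    return lo + idx * (hi - lo) / (levels - 1)

def one_shot_estimation(m=1000, n=5, B=8, theta=0.3, seed=None):
    # m machines, each with n i.i.d. Gaussian samples centered at theta.
    rng = np.random.default_rng(seed)
    samples = rng.normal(theta, 1.0, size=(m, n))
    local_estimates = samples.mean(axis=1)       # local estimate per machine
    messages = quantize(local_estimates, B)      # one B-bit message per machine
    return messages.mean()                       # server-side aggregation

if __name__ == "__main__":
    est = one_shot_estimation(seed=0)
    print(f"server estimate: {est:.4f} (true parameter 0.3)")
```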

Cite

Text

Salehkaleybar et al. "One-Shot Federated Learning: Theoretical Limits and Algorithms to Achieve Them." Journal of Machine Learning Research, 2021.

Markdown

[Salehkaleybar et al. "One-Shot Federated Learning: Theoretical Limits and Algorithms to Achieve Them." Journal of Machine Learning Research, 2021.](https://mlanthology.org/jmlr/2021/salehkaleybar2021jmlr-oneshot/)

BibTeX

@article{salehkaleybar2021jmlr-oneshot,
  title     = {{One-Shot Federated Learning: Theoretical Limits and Algorithms to Achieve Them}},
  author    = {Salehkaleybar, Saber and Sharifnassab, Arsalan and Golestani, S. Jamaloddin},
  journal   = {Journal of Machine Learning Research},
  year      = {2021},
  pages     = {1--47},
  volume    = {22},
  url       = {https://mlanthology.org/jmlr/2021/salehkaleybar2021jmlr-oneshot/}
}