Wyner-Ziv Estimators: Efficient Distributed Mean Estimation with Side-Information

Abstract

Communication-efficient distributed mean estimation is an important primitive that arises in many distributed learning and optimization scenarios, such as federated learning. We study the problem of distributed mean estimation, without any probabilistic assumptions on the underlying data, when the server has access to side information. We propose \emph{Wyner-Ziv estimators}, which are efficient and near-optimal when an upper bound on the distance between the side information and the data is known. In a different direction, when no knowledge of this distance is assumed, we present an alternative Wyner-Ziv estimator that uses correlated sampling. This latter setting offers universal recovery guarantees and may be of interest in practice when the number of users is large and keeping track of the distances between the data and the side information is infeasible.
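For intuition, the sketch below is a minimal, one-dimensional illustration of the first setting, assuming a known upper bound delta on the distance |x - y| between a client's data x and the server's side information y. It is not the paper's estimator or its tuning; the step size 3*delta/k and the names wz_encode and wz_decode are illustrative choices. The client sends only a grid index modulo k (about log2(k) bits) and never sees y; the server resolves the modular ambiguity using y.

import numpy as np

def wz_encode(x, delta, k):
    # Client: quantize x to the nearest point of a grid with step s and send
    # only the grid index modulo k (about log2(k) bits). No access to y here.
    s = 3.0 * delta / k   # illustrative step; correct decoding needs (k - 1) * s > 2 * delta
    return int(np.round(x / s)) % k

def wz_decode(msg, y, delta, k):
    # Server: among grid points whose index is congruent to msg (mod k), return
    # the one closest to the side information y. Candidates are k * s apart, so
    # when (k - 1) * s > 2 * delta the client's grid point is the unique closest one.
    s = 3.0 * delta / k
    base = int(np.round(y / s))
    candidates = [q for q in range(base - k, base + k + 1) if q % k == msg]
    q_hat = min(candidates, key=lambda q: abs(q * s - y))
    return q_hat * s

# Toy usage: y is within delta of x, so the reconstruction error is at most s / 2.
rng = np.random.default_rng(0)
delta, k = 1.0, 8
x = rng.uniform(-5.0, 5.0)
y = x + rng.uniform(-delta, delta)      # server's side information
x_hat = wz_decode(wz_encode(x, delta, k), y, delta, k)
print(abs(x - x_hat) <= 1.5 * delta / k)   # True: error at most s / 2 = 1.5 * delta / k

In higher dimensions one could apply such a scheme coordinate-wise; the paper's actual constructions, their bit-budget trade-offs, and the correlated-sampling estimator for the unknown-distance case differ in the details.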

Cite

Text

Mayekar et al. "Wyner-Ziv Estimators: Efficient Distributed Mean Estimation with Side-Information." Artificial Intelligence and Statistics, 2021.

Markdown

[Mayekar et al. "Wyner-Ziv Estimators: Efficient Distributed Mean Estimation with Side-Information." Artificial Intelligence and Statistics, 2021.](https://mlanthology.org/aistats/2021/mayekar2021aistats-wynerziv/)

BibTeX

@inproceedings{mayekar2021aistats-wynerziv,
  title     = {{Wyner-Ziv Estimators: Efficient Distributed Mean Estimation with Side-Information}},
  author    = {Mayekar, Prathamesh and Theertha Suresh, Ananda and Tyagi, Himanshu},
  booktitle = {Artificial Intelligence and Statistics},
  year      = {2021},
  pages     = {3502--3510},
  volume    = {130},
  url       = {https://mlanthology.org/aistats/2021/mayekar2021aistats-wynerziv/}
}