Differentially Private Aggregation in the Shuffle Model: Almost Central Accuracy in Almost a Single Message
Abstract
The shuffle model of differential privacy has attracted attention in the literature as a middle ground between the well-studied central and local models. In this work, we study the problem of summing (aggregating) real numbers or integers, a basic primitive in numerous machine learning tasks, in the shuffle model. We give a protocol whose error is arbitrarily close to that of the (Discrete) Laplace mechanism in central differential privacy, while each user sends only 1 + o(1) short messages in expectation.
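For context, the central-model baseline the abstract compares against — an integer sum with Discrete Laplace noise — can be sketched as follows. This is only an illustration of that baseline, not the paper's shuffle-model protocol; the function name, parameters, and the geometric-difference sampler are my own choices. It uses the standard fact that the difference of two i.i.d. geometric variables with success probability p = 1 − exp(−ε/Δ) is Discrete-Laplace distributed with weight proportional to exp(−ε|k|/Δ).

```python
import numpy as np

def discrete_laplace_sum(values, eps, sensitivity, rng=None):
    """Central-model baseline (illustrative): sum integers, add Discrete Laplace noise.

    With success probability p = 1 - exp(-eps/sensitivity), the difference of
    two i.i.d. geometric samples is symmetric with P(k) proportional to
    exp(-eps * |k| / sensitivity), giving eps-DP for a sum whose per-user
    contribution changes by at most `sensitivity`.
    """
    rng = rng or np.random.default_rng()
    p = 1.0 - np.exp(-eps / sensitivity)
    # Difference of two geometrics on {1, 2, ...} -> Discrete Laplace noise.
    noise = int(rng.geometric(p)) - int(rng.geometric(p))
    return sum(int(v) for v in values) + noise
```

The paper's contribution is matching this mechanism's error in the shuffle model, where no trusted aggregator adds the noise, while keeping communication near a single message per user.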
Cite
Text
Ghazi et al. "Differentially Private Aggregation in the Shuffle Model: Almost Central Accuracy in Almost a Single Message." International Conference on Machine Learning, 2021.
Markdown
[Ghazi et al. "Differentially Private Aggregation in the Shuffle Model: Almost Central Accuracy in Almost a Single Message." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/ghazi2021icml-differentially/)
BibTeX
@inproceedings{ghazi2021icml-differentially,
title = {{Differentially Private Aggregation in the Shuffle Model: Almost Central Accuracy in Almost a Single Message}},
author = {Ghazi, Badih and Kumar, Ravi and Manurangsi, Pasin and Pagh, Rasmus and Sinha, Amer},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {3692--3701},
volume = {139},
url = {https://mlanthology.org/icml/2021/ghazi2021icml-differentially/}
}