Communication Efficient Federated Learning with Secure Aggregation and Differential Privacy

Abstract

Optimizing the privacy-utility-communication tradeoff is a key challenge for federated learning. Under distributed differential privacy (DP) via secure aggregation (SecAgg), we prove that the worst-case communication cost per client must be at least $\Omega\left( d \log \left( \frac{n^2\varepsilon^2}{d} \right) \right)$ to achieve $O\left( \frac{d}{n^2\varepsilon^2} \right)$ centralized error, which matches the error under central DP. Despite this bound, we leverage the near-sparse structure of model updates, evidenced by recent empirical studies, to obtain improved tradeoffs for distributed DP. In particular, we leverage linear compression methods, namely sketching, to attain compression rates of up to $50\times$ with no significant decrease in model test accuracy while achieving a noise multiplier of $0.5$. Our work demonstrates that fundamental tradeoffs in differentially private federated learning can be drastically improved in practice.
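The core idea of combining linear sketching with SecAgg and distributed DP can be illustrated with a minimal count-sketch simulation. This is a hedged sketch, not the paper's implementation: the dimensions, client count, clipping norm, and the plain Gaussian noise added to the aggregate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 1000, 100          # model dimension, sketch size (10x compression; illustrative)
n_clients = 10            # illustrative client count

# Shared count-sketch parameters: one bucket and one random sign per coordinate.
buckets = rng.integers(0, k, size=d)
signs = rng.choice([-1.0, 1.0], size=d)

def sketch(x):
    """Linearly compress x in R^d to a sketch in R^k."""
    s = np.zeros(k)
    np.add.at(s, buckets, signs * x)
    return s

def desketch(s):
    """Coordinate-wise estimate of the original vector from its sketch."""
    return signs * s[buckets]

# Near-sparse client updates: a few large coordinates, small noise elsewhere.
updates = []
for _ in range(n_clients):
    x = 0.01 * rng.standard_normal(d)
    x[rng.integers(0, d, size=5)] += rng.standard_normal(5)
    updates.append(x)

# SecAgg reveals only the *sum* of client messages; because sketching is
# linear, the sum of sketches equals the sketch of the summed update.
agg_sketch = sum(sketch(x) for x in updates)

# Distributed DP (simplified here as noise on the aggregate): Gaussian noise
# with multiplier 0.5 relative to an assumed per-client clipping norm of 1.0.
noise_multiplier, clip_norm = 0.5, 1.0
agg_sketch = agg_sketch + noise_multiplier * clip_norm * rng.standard_normal(k)

# Server decodes an estimate of the aggregate update from only k numbers.
estimate = desketch(agg_sketch)
```

The linearity of the sketch is what makes it compatible with SecAgg: compressing before aggregation commutes with summing, so the server never needs individual updates.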

Cite

Text

Chen et al. "Communication Efficient Federated Learning with Secure Aggregation and Differential Privacy." NeurIPS 2021 Workshops: PRIML, 2021.

BibTeX

@inproceedings{chen2021neuripsw-communication,
  title     = {{Communication Efficient Federated Learning with Secure Aggregation and Differential Privacy}},
  author    = {Chen, Wei-Ning and Choquette-Choo, Christopher A. and Kairouz, Peter},
  booktitle = {NeurIPS 2021 Workshops: PRIML},
  year      = {2021},
  url       = {https://mlanthology.org/neuripsw/2021/chen2021neuripsw-communication/}
}