Personalization Improves Privacy-Accuracy Tradeoffs in Federated Learning

Abstract

Large-scale machine learning systems often involve data distributed across a collection of users. Federated learning algorithms leverage this structure by communicating model updates to a central server, rather than entire datasets. In this paper, we study stochastic optimization algorithms for a personalized federated learning setting involving local and global models subject to user-level (joint) differential privacy. While learning a private global model induces a cost of privacy, local learning is perfectly private. We provide generalization guarantees showing that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy. We illustrate our theoretical results with experiments on synthetic and real-world datasets.

Cite

Text

Bietti et al. "Personalization Improves Privacy-Accuracy Tradeoffs in Federated Learning." International Conference on Machine Learning, 2022.

Markdown

[Bietti et al. "Personalization Improves Privacy-Accuracy Tradeoffs in Federated Learning." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/bietti2022icml-personalization/)

BibTeX

@inproceedings{bietti2022icml-personalization,
  title     = {{Personalization Improves Privacy-Accuracy Tradeoffs in Federated Learning}},
  author    = {Bietti, Alberto and Wei, Chen-Yu and Dud{\'\i}k, Miroslav and Langford, John and Wu, Steven},
  booktitle = {International Conference on Machine Learning},
  year      = {2022},
  pages     = {1945--1962},
  volume    = {162},
  url       = {https://mlanthology.org/icml/2022/bietti2022icml-personalization/}
}