Federated F-Differential Privacy
Abstract
Federated learning (FL) is a training paradigm in which clients collaboratively learn models by repeatedly sharing information, without substantially compromising the privacy of their local sensitive data. In this paper, we introduce federated $f$-differential privacy, a new notion tailored to the federated setting and built on the framework of Gaussian differential privacy. Federated $f$-differential privacy operates at the record level: it provides a privacy guarantee for each individual record of a client's data against adversaries. We then propose a generic private federated learning framework, PriFedSync, that accommodates a large family of state-of-the-art FL algorithms and provably achieves federated $f$-differential privacy. Finally, we empirically demonstrate the trade-off between privacy guarantee and prediction performance for models trained by PriFedSync on computer vision tasks.
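As background, a brief sketch of the general (non-federated) $f$-differential privacy framework referenced in the abstract; this summarizes the standard Gaussian DP formulation rather than the paper's federated definitions. A mechanism's outputs on neighboring datasets are compared through a trade-off function:

\[
T(P, Q)(\alpha) \;=\; \inf_{\phi}\bigl\{\, \beta_\phi : \alpha_\phi \le \alpha \,\bigr\},
\]

where the infimum runs over rejection rules $\phi$, and $\alpha_\phi$, $\beta_\phi$ denote the type I and type II errors of $\phi$. A mechanism $M$ is $f$-DP if $T\bigl(M(S), M(S')\bigr) \ge f$ for every pair of neighboring datasets $S, S'$. Gaussian differential privacy is the special case $f = G_\mu := T\bigl(\mathcal{N}(0,1), \mathcal{N}(\mu,1)\bigr)$, which evaluates to

\[
G_\mu(\alpha) \;=\; \Phi\bigl(\Phi^{-1}(1-\alpha) - \mu\bigr),
\]

with $\Phi$ the standard normal CDF.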
Cite
Text
Zheng et al. "Federated F-Differential Privacy." Artificial Intelligence and Statistics, 2021.

Markdown

[Zheng et al. "Federated F-Differential Privacy." Artificial Intelligence and Statistics, 2021.](https://mlanthology.org/aistats/2021/zheng2021aistats-federated/)

BibTeX
@inproceedings{zheng2021aistats-federated,
title = {{Federated F-Differential Privacy}},
author = {Zheng, Qinqing and Chen, Shuxiao and Long, Qi and Su, Weijie},
booktitle = {Artificial Intelligence and Statistics},
year = {2021},
pages = {2251--2259},
volume = {130},
url = {https://mlanthology.org/aistats/2021/zheng2021aistats-federated/}
}