Evaluating Gradient Inversion Attacks and Defenses in Federated Learning
Abstract
Gradient inversion attacks (also known as input recovery from gradients) are an emerging threat to the security and privacy of federated learning, whereby malicious eavesdroppers or participants in the protocol can partially recover clients' private data. This paper evaluates existing attacks and defenses. We find that some attacks make strong assumptions about the setup, and that relaxing such assumptions can substantially weaken these attacks. We then evaluate the benefits of three proposed defense mechanisms against gradient inversion attacks. We show the trade-offs between privacy leakage and data utility for these defense methods, and find that combining them appropriately makes the attack less effective, even under the original strong assumptions. We also estimate the computational cost of end-to-end recovery of a single image under each evaluated defense. Our findings suggest that state-of-the-art attacks can currently be defended against with minor loss of data utility, as summarized in a list of potential strategies.
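To make the threat model concrete, below is a minimal sketch of a gradient-matching inversion attack in the spirit of the attacks the paper evaluates: the attacker observes a client's shared gradient and optimizes a dummy input until its gradient matches. Everything here (the tiny linear model, the image shape, the optimizer settings, and the assumption that the label is already known, which is one of the strong setup assumptions the paper examines) is illustrative rather than the paper's exact configuration.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model standing in for the federated model; purely illustrative.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

# "Private" client example that the attacker never sees directly.
x_true = torch.rand(1, 3, 32, 32)
y_true = torch.tensor([3])  # assumed known to the attacker (a strong assumption)

# The attacker observes the gradient the client would share with the server.
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true), model.parameters())
true_grads = [g.detach() for g in true_grads]

# Attack: optimize a dummy input so its gradient matches the observed one.
x_dummy = torch.rand(1, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)

for step in range(200):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(
        loss_fn(model(x_dummy), y_true), model.parameters(), create_graph=True
    )
    # L2 gradient-matching objective; cosine distance is another common choice.
    match_loss = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    match_loss.backward()
    opt.step()

print(f"final gradient-matching loss: {match_loss.item():.4g}")
print(f"reconstruction error: {(x_dummy - x_true).norm().item():.4g}")

On a toy model like this, the matching loss drives the reconstruction error toward zero within a few hundred steps. The defenses the paper evaluates (for example, perturbing or pruning the shared gradient) degrade exactly this matching signal, at some cost in data utility.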
Cite
Text
Huang et al. "Evaluating Gradient Inversion Attacks and Defenses in Federated Learning." Neural Information Processing Systems, 2021.Markdown
[Huang et al. "Evaluating Gradient Inversion Attacks and Defenses in Federated Learning." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/huang2021neurips-evaluating/)BibTeX
@inproceedings{huang2021neurips-evaluating,
title = {{Evaluating Gradient Inversion Attacks and Defenses in Federated Learning}},
author = {Huang, Yangsibo and Gupta, Samyak and Song, Zhao and Li, Kai and Arora, Sanjeev},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/huang2021neurips-evaluating/}
}