Revisiting Gradient Pruning: A Dual Realization for Defending Against Gradient Attacks

Abstract

Collaborative learning (CL) is a distributed learning framework that aims to protect user privacy by letting users jointly train a model while sharing only their gradient updates. However, gradient inversion attacks (GIAs), which recover users' training data from the shared gradients, pose severe privacy threats to CL. Existing defenses rely on techniques such as differential privacy, cryptography, and perturbation, but all of them suffer from a poor trade-off among privacy, utility, and efficiency. To mitigate these weaknesses, we propose a novel defense, Dual Gradient Pruning (DGP), which builds on gradient pruning to improve communication efficiency while preserving the utility and privacy of CL. Specifically, DGP slightly modifies gradient pruning to obtain a stronger privacy guarantee, and it also significantly improves communication efficiency, supported by a theoretical analysis of its convergence and generalization. Our extensive experiments show that DGP effectively defends against the most powerful GIAs and reduces communication cost without sacrificing model utility.
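The abstract does not specify the exact DGP procedure, but it builds on conventional gradient pruning, which keeps only the largest-magnitude gradient entries and zeroes the rest. A minimal sketch of that baseline technique (the function name, `keep_ratio` parameter, and NumPy formulation are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def prune_gradient(grad, keep_ratio=0.1):
    """Magnitude-based gradient pruning: keep the top-k entries by
    absolute value and zero out the rest before sharing the update.
    This is the conventional baseline, not the paper's DGP variant."""
    flat = grad.ravel()
    k = max(1, int(keep_ratio * flat.size))
    # Indices of the k largest-magnitude entries
    topk_idx = np.argpartition(np.abs(flat), -k)[-k:]
    pruned = np.zeros_like(flat)
    pruned[topk_idx] = flat[topk_idx]
    return pruned.reshape(grad.shape)

grad = np.array([[0.5, -0.02, 0.3],
                 [-0.9, 0.01, 0.04]])
sparse = prune_gradient(grad, keep_ratio=0.5)
# Only the 3 largest-magnitude entries survive; the rest are zero.
```

Sparsifying the shared update in this way both cuts communication cost (only nonzero entries need transmitting) and removes information a gradient inversion attack could exploit, which is the trade-off DGP aims to improve.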

Cite

Text

Xue et al. "Revisiting Gradient Pruning: A Dual Realization for Defending Against Gradient Attacks." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I6.28460

Markdown

[Xue et al. "Revisiting Gradient Pruning: A Dual Realization for Defending Against Gradient Attacks." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/xue2024aaai-revisiting/) doi:10.1609/AAAI.V38I6.28460

BibTeX

@inproceedings{xue2024aaai-revisiting,
  title     = {{Revisiting Gradient Pruning: A Dual Realization for Defending Against Gradient Attacks}},
  author    = {Xue, Lulu and Hu, Shengshan and Zhao, Ruizhi and Zhang, Leo Yu and Hu, Shengqing and Sun, Lichao and Yao, Dezhong},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {6404--6412},
  doi       = {10.1609/AAAI.V38I6.28460},
  url       = {https://mlanthology.org/aaai/2024/xue2024aaai-revisiting/}
}