Privacy Leakage of Adversarial Training Models in Federated Learning Systems

Abstract

Adversarial Training (AT) is crucial for obtaining deep neural networks that are robust to adversarial attacks, yet recent works have found that it can also make models more vulnerable to privacy attacks. In this work, we further reveal this unsettling property of AT by designing a novel privacy attack that is practically applicable to privacy-sensitive Federated Learning (FL) systems. Using our method, the attacker can exploit AT models in the FL system to accurately reconstruct users' private training images even when the training batch size is large. Code is available at https://github.com/zjysteven/PrivayAttack_AT_FL.
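The attack described above belongs to the family of gradient-inversion (gradient-matching) attacks on FL, where the attacker optimizes dummy inputs so that their gradients match the update shared by a client. The sketch below illustrates that general idea only; it is not the authors' exact method, and the model, hyperparameters, and helper names (`SmallConvNet`, `reconstruct`) are illustrative assumptions.

```python
# Minimal, hypothetical sketch of gradient-inversion reconstruction in an FL setting.
# This shows the generic attack family, not the paper's specific technique.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallConvNet(nn.Module):
    """A tiny stand-in for the model trained in the FL system (assumed architecture)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.fc = nn.Linear(16 * 32 * 32, num_classes)

    def forward(self, x):
        h = F.relu(self.conv(x))
        return self.fc(h.flatten(1))


def reconstruct(model, shared_grads, labels, img_shape=(1, 3, 32, 32), steps=200):
    """Optimize dummy images so their gradients match the gradients shared by a client."""
    dummy = torch.randn(img_shape, requires_grad=True)
    opt = torch.optim.Adam([dummy], lr=0.1)
    params = [p for p in model.parameters() if p.requires_grad]
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(dummy), labels)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        # Gradient-matching objective: cosine distance between dummy and shared gradients.
        match = sum(
            1 - F.cosine_similarity(g.flatten(), sg.flatten(), dim=0)
            for g, sg in zip(grads, shared_grads)
        )
        match.backward()
        opt.step()
    return dummy.detach()


if __name__ == "__main__":
    model = SmallConvNet()
    # Simulate a client's shared update: gradients of the loss on one private image.
    private_x = torch.rand(1, 3, 32, 32)
    private_y = torch.tensor([3])
    loss = F.cross_entropy(model(private_x), private_y)
    shared_grads = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]
    recon = reconstruct(model, shared_grads, private_y)
    print("reconstruction shape:", recon.shape)
```

The paper's contribution concerns how adversarially trained models make such reconstruction easier, including at large batch sizes; see the paper for the actual attack formulation.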

Cite

Text

Zhang et al. "Privacy Leakage of Adversarial Training Models in Federated Learning Systems." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022. doi:10.1109/CVPRW56347.2022.00021

Markdown

[Zhang et al. "Privacy Leakage of Adversarial Training Models in Federated Learning Systems." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022.](https://mlanthology.org/cvprw/2022/zhang2022cvprw-privacy/) doi:10.1109/CVPRW56347.2022.00021

BibTeX

@inproceedings{zhang2022cvprw-privacy,
  title     = {{Privacy Leakage of Adversarial Training Models in Federated Learning Systems}},
  author    = {Zhang, Jingyang and Chen, Yiran and Li, Hai Helen},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2022},
  pages     = {107--113},
  doi       = {10.1109/CVPRW56347.2022.00021},
  url       = {https://mlanthology.org/cvprw/2022/zhang2022cvprw-privacy/}
}