Auditing Differentially Private Machine Learning: How Private Is Private SGD?

Abstract

We investigate whether Differentially Private SGD offers better privacy in practice than what is guaranteed by its state-of-the-art analysis. We do so via novel data poisoning attacks, which we show correspond to realistic privacy attacks. While previous work (Ma et al., arXiv 2019) proposed this connection between differential privacy and data poisoning as a defense against data poisoning, our use of it as a tool for understanding the privacy of a specific mechanism is new. More generally, our work takes a quantitative, empirical approach to understanding the privacy afforded by specific implementations of differentially private algorithms, which we believe has the potential to complement and influence analytical work on differential privacy.
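The auditing recipe the abstract describes can be summarized as: train many DP-SGD models on a dataset with and without a small set of poisoned points, run a distinguishing test on the resulting models, and convert the test's true and false positive rates into a statistical lower bound on the privacy parameter epsilon. The sketch below illustrates only that final conversion step; it is not the authors' released code, and the attack counts, function names, and the use of Clopper-Pearson intervals to make the bound conservative are illustrative assumptions.

import numpy as np
from scipy import stats

def clopper_pearson(k, n, alpha=0.05):
    # Two-sided (1 - alpha) Clopper-Pearson confidence interval for a binomial proportion k/n.
    lo = stats.beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = stats.beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

def empirical_epsilon_lower_bound(tp, n_poisoned, fp, n_clean, delta=1e-5, alpha=0.05):
    # tp / n_poisoned: attack fires on models trained WITH the poisoned points (true positives).
    # fp / n_clean:    attack fires on models trained WITHOUT them (false positives).
    # (eps, delta)-DP implies TPR <= exp(eps) * FPR + delta, so a conservative
    # lower bound on TPR and upper bound on FPR yield a lower bound on eps.
    tpr_lb, _ = clopper_pearson(tp, n_poisoned, alpha)
    _, fpr_ub = clopper_pearson(fp, n_clean, alpha)
    if fpr_ub <= 0.0 or tpr_lb - delta <= 0.0:
        return 0.0  # attack too weak to certify any nontrivial epsilon
    return max(0.0, np.log((tpr_lb - delta) / fpr_ub))

# Hypothetical outcome: 500 training runs with and 500 without the poisoned points.
print(empirical_epsilon_lower_bound(tp=450, n_poisoned=500, fp=30, n_clean=500))

If a lower bound of this form exceeds the epsilon claimed by the DP-SGD analysis, the implementation or its analysis is contradicted; if it falls well below, the audit certifies only the gap that this particular attack can demonstrate.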

Cite

Text

Jagielski et al. "Auditing Differentially Private Machine Learning: How Private Is Private SGD?" Neural Information Processing Systems, 2020.

Markdown

[Jagielski et al. "Auditing Differentially Private Machine Learning: How Private Is Private SGD?" Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/jagielski2020neurips-auditing/)

BibTeX

@inproceedings{jagielski2020neurips-auditing,
  title     = {{Auditing Differentially Private Machine Learning: How Private Is Private SGD?}},
  author    = {Jagielski, Matthew and Ullman, Jonathan and Oprea, Alina},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/jagielski2020neurips-auditing/}
}