From Differential Privacy to Bounds on Membership Inference: Less Can Be More

Abstract

Differential Privacy (DP) is the de facto standard for reasoning about the privacy of a training algorithm. Yet, learning with DP often yields poor performance unless one trains on a large dataset. In this paper, we instead outline how training on less data can be beneficial when we are only interested in defending against specific attacks; we take the canonical example of defending against membership inference. To arrive at this result, we first derive (tight) bounds on the success of all membership inference attacks. These bounds do not replace DP; rather, they introduce a complementary interpretation of a DP algorithm's ability to defend against membership inference specifically. Because our bound more tightly captures the effect of how the training data was selected, we can show that decreasing the sampling rate used to construct the training dataset affects the bound differently than strengthening the DP guarantee does. Thus, when the privacy protection we care about is defending against membership inference, training on less data can yield more advantageous trade-offs between preventing membership inference and utility than strengthening the DP guarantee. We empirically illustrate this on MNIST, CIFAR-10, and SVHN-extended.
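The effect the abstract describes can be sketched with two standard results that are not specific to this paper: the hypothesis-testing bound on balanced membership-inference accuracy under an (eps, delta)-DP mechanism, and privacy amplification by Poisson subsampling at rate q. The Python below is a minimal illustration under those assumptions only; the paper's own bound is tighter and models the data-selection step directly, so this sketch conveys just the qualitative effect of lowering the sampling rate.

import math

def mi_accuracy_bound(eps: float, delta: float = 0.0) -> float:
    """Bound on balanced membership-inference accuracy against an
    (eps, delta)-DP mechanism, from the standard hypothesis-testing
    trade-off: FPR + e^eps * FNR >= 1 - delta (and symmetrically),
    so accuracy <= 1 - (1 - delta) / (1 + e^eps)."""
    return 1.0 - (1.0 - delta) / (1.0 + math.exp(eps))

def subsampled_eps(eps: float, q: float) -> float:
    """Textbook privacy amplification by Poisson subsampling: running an
    eps-DP mechanism on a q-fraction of the data is roughly
    log(1 + q * (e^eps - 1))-DP. (This is not the paper's bound.)"""
    return math.log(1.0 + q * (math.exp(eps) - 1.0))

# Two ways to lower the attacker's ceiling: strengthen eps directly,
# or keep eps fixed and subsample the training set at rate q.
eps, q = 2.0, 0.1
print(f"bound at eps = {eps}:          {mi_accuracy_bound(eps):.3f}")
print(f"bound with subsampling q = {q}: {mi_accuracy_bound(subsampled_eps(eps, q)):.3f}")

With eps = 2 and q = 0.1, the direct bound is about 0.88 while the subsampled bound drops to about 0.62, illustrating qualitatively how sampling less data can tighten the membership-inference bound without changing the underlying DP mechanism.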

Cite

Text

Thudi et al. "From Differential Privacy to Bounds on Membership Inference: Less Can Be More." Transactions on Machine Learning Research, 2024.

Markdown

[Thudi et al. "From Differential Privacy to Bounds on Membership Inference: Less Can Be More." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/thudi2024tmlr-differential/)

BibTeX

@article{thudi2024tmlr-differential,
  title     = {{From Differential Privacy to Bounds on Membership Inference: Less Can Be More}},
  author    = {Thudi, Anvith and Shumailov, Ilia and Boenisch, Franziska and Papernot, Nicolas},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/thudi2024tmlr-differential/}
}