Provable Membership Inference Privacy

Abstract

In applications involving sensitive data, such as finance and healthcare, the need to preserve data privacy can be a significant barrier to machine learning model development. Differential privacy (DP) has emerged as a canonical standard for provable privacy. However, DP’s strong theoretical guarantees often come at the cost of a large drop in utility for machine learning, and DP guarantees themselves are difficult to interpret. In this work, we propose a novel privacy notion, membership inference privacy (MIP), as a step towards addressing these challenges. We give a precise characterization of the relationship between MIP and DP, and show that in some cases MIP can be guaranteed with less randomness than DP requires, leading to a smaller drop in utility. MIP guarantees are also easily interpretable in terms of the success rate of membership inference attacks in a simple random subsampling setting. As a proof of concept, we also provide a simple algorithm for guaranteeing MIP without needing to guarantee DP.
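The interpretability claim above — that MIP bounds an attacker's success rate in a random subsampling setting — can be illustrated with a toy membership inference "game." The sketch below is purely illustrative and does not implement the paper's algorithm: the `release` mechanism (sample mean plus uniform noise) and the sign-based attacker are hypothetical stand-ins chosen for simplicity.

```python
import random
import statistics

random.seed(0)

def release(sample, noise_scale):
    """Hypothetical mechanism: release the sample mean plus uniform noise.

    Uniform noise is a stand-in for a properly calibrated noise
    distribution; it is not the mechanism from the paper.
    """
    return statistics.mean(sample) + random.uniform(-noise_scale, noise_scale)

def attack_accuracy(noise_scale, trials=2000):
    """Estimate a naive attacker's success rate at guessing membership."""
    correct = 0
    for _ in range(trials):
        target = random.gauss(0, 1)                # the point under attack
        others = [random.gauss(0, 1) for _ in range(9)]
        included = random.random() < 0.5           # random subsampling step
        sample = others + ([target] if included else [random.gauss(0, 1)])
        out = release(sample, noise_scale)
        # Attacker guesses "member" when the released mean leans toward target.
        guess = out * target > 0
        correct += (guess == included)
    return correct / trials

print(attack_accuracy(noise_scale=0.0))   # no noise: slight attacker edge
print(attack_accuracy(noise_scale=10.0))  # heavy noise: near the 50% baseline
```

With no noise the attacker does marginally better than coin-flipping; with heavy noise the success rate falls back to the 50% random-guessing baseline, which is the quantity an MIP guarantee bounds.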

Cite

Text

Izzo et al. "Provable Membership Inference Privacy." Transactions on Machine Learning Research, 2024.

Markdown

[Izzo et al. "Provable Membership Inference Privacy." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/izzo2024tmlr-provable/)

BibTeX

@article{izzo2024tmlr-provable,
  title     = {{Provable Membership Inference Privacy}},
  author    = {Izzo, Zachary and Yoon, Jinsung and Arik, Sercan O and Zou, James},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/izzo2024tmlr-provable/}
}