Understanding Practical Membership Privacy of Deep Learning
Abstract
We apply a state-of-the-art membership inference attack (MIA) to systematically test the practical privacy vulnerability of fine-tuning large image classification models. We focus on understanding the properties of data sets and samples that make them vulnerable to membership inference. In terms of data set properties, we find a strong power-law dependence between the number of examples per class in the data and the MIA vulnerability, as measured by the true positive rate of the attack at a low false positive rate. For an individual sample, large gradients at the end of training are strongly correlated with MIA vulnerability.
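As a rough illustration of the vulnerability metric named above (the attack's true positive rate at a fixed low false positive rate) and of a power-law check in log-log space, the following sketch is offered; it is not from the paper, and the function names, inputs, and the 0.001 target FPR are hypothetical assumptions.

```python
import numpy as np

def tpr_at_low_fpr(member_scores, nonmember_scores, target_fpr=1e-3):
    """Attack TPR at a fixed low FPR, computed from per-example attack scores.
    Higher scores are assumed to indicate 'member'. Hypothetical helper."""
    scores = np.concatenate([member_scores, nonmember_scores])
    labels = np.concatenate([np.ones(len(member_scores)),
                             np.zeros(len(nonmember_scores))])
    order = np.argsort(-scores)          # sweep thresholds from high to low score
    labels = labels[order]
    tpr = np.cumsum(labels) / labels.sum()
    fpr = np.cumsum(1.0 - labels) / (1.0 - labels).sum()
    feasible = fpr <= target_fpr         # thresholds that keep FPR at or below target
    return tpr[feasible].max() if feasible.any() else 0.0

def fit_power_law(shots_per_class, vulnerability):
    """Fit vulnerability ~= a * shots**b via a linear fit in log-log space."""
    b, log_a = np.polyfit(np.log(shots_per_class), np.log(vulnerability), 1)
    return np.exp(log_a), b
```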
Cite
Text
Tobaben et al. "Understanding Practical Membership Privacy of Deep Learning." ICLR 2024 Workshops: PML, 2024.
Markdown
[Tobaben et al. "Understanding Practical Membership Privacy of Deep Learning." ICLR 2024 Workshops: PML, 2024.](https://mlanthology.org/iclrw/2024/tobaben2024iclrw-understanding/)
BibTeX
@inproceedings{tobaben2024iclrw-understanding,
title = {{Understanding Practical Membership Privacy of Deep Learning}},
author = {Tobaben, Marlon and Pradhan, Gauri and He, Yuan and Jälkö, Joonas and Honkela, Antti},
booktitle = {ICLR 2024 Workshops: PML},
year = {2024},
url = {https://mlanthology.org/iclrw/2024/tobaben2024iclrw-understanding/}
}