Impact of Dataset Properties on Membership Inference Vulnerability of Deep Transfer Learning
Abstract
Membership inference attacks (MIAs) are used to test the practical privacy of machine learning models. MIAs complement the formal guarantees of differential privacy (DP) under a more realistic adversary model. We analyse the MIA vulnerability of fine-tuned neural networks both empirically and theoretically, the latter using a simplified model of fine-tuning. We show that the vulnerability of non-DP models, when measured as the attacker advantage at a fixed false positive rate, decreases according to a simple power law as the number of examples per class increases. A similar power law applies even for the most vulnerable points, but the dataset size needed to adequately protect the most vulnerable points is very large.
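As a rough illustration of the measurement described above, the sketch below estimates a threshold attack's true positive rate at a fixed false positive rate from per-example membership scores, then fits a power law to attack success as a function of examples (shots) per class. This is a minimal sketch under assumed synthetic scores, not the paper's implementation; `tpr_at_fpr`, `fit_power_law`, and all variable names are hypothetical, and real scores would come from an MIA such as a likelihood-ratio attack on the fine-tuned model.

```python
# Minimal sketch (not the paper's code): estimate attack success at a fixed
# false positive rate and fit a power law in shots per class.
import numpy as np

def tpr_at_fpr(member_scores, nonmember_scores, fpr=0.01):
    """TPR of a threshold attack at the given FPR (higher score = 'member')."""
    # Pick the threshold so that exactly `fpr` of non-members are flagged.
    threshold = np.quantile(nonmember_scores, 1.0 - fpr)
    return float(np.mean(member_scores > threshold))

def fit_power_law(shots, advantage):
    """Fit advantage ~ c * shots**(-alpha) by least squares in log-log space."""
    slope, intercept = np.polyfit(np.log(shots), np.log(advantage), 1)
    return float(np.exp(intercept)), float(-slope)  # (c, alpha)

# Toy usage with synthetic scores whose member/non-member separation shrinks
# as the number of shots per class grows (an assumed, illustrative model).
rng = np.random.default_rng(0)
shots = np.array([2, 8, 32, 128])
adv = np.array([tpr_at_fpr(rng.normal(2.0 / s, 1.0, 10_000),
                           rng.normal(0.0, 1.0, 10_000)) for s in shots])
c, alpha = fit_power_law(shots, np.maximum(adv, 1e-6))
print(f"fitted: advantage ~ {c:.3f} * shots^(-{alpha:.3f})")
```

A log-log least-squares fit is used here because a power law `c * shots**(-alpha)` is linear in log-log space; the clamp to `1e-6` only guards the logarithm when an estimated TPR is zero.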
Cite
Text
Tobaben et al. "Impact of Dataset Properties on Membership Inference Vulnerability of Deep Transfer Learning." Advances in Neural Information Processing Systems, 2025.
Markdown
[Tobaben et al. "Impact of Dataset Properties on Membership Inference Vulnerability of Deep Transfer Learning." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/tobaben2025neurips-impact/)
BibTeX
@inproceedings{tobaben2025neurips-impact,
  title = {{Impact of Dataset Properties on Membership Inference Vulnerability of Deep Transfer Learning}},
  author = {Tobaben, Marlon and Ito, Hibiki and Jälkö, Joonas and He, Yuan and Honkela, Antti},
  booktitle = {Advances in Neural Information Processing Systems},
  year = {2025},
  url = {https://mlanthology.org/neurips/2025/tobaben2025neurips-impact/}
}