White-Box vs Black-Box: Bayes Optimal Strategies for Membership Inference
Abstract
Membership inference determines, given a sample and trained parameters of a machine learning model, whether the sample was part of the training set. In this paper, we derive the optimal strategy for membership inference with a few assumptions on the distribution of the parameters. We show that optimal attacks only depend on the loss function, and thus black-box attacks are as good as white-box attacks. As the optimal strategy is not tractable, we provide approximations of it leading to several inference methods, and show that existing membership inference methods are coarser approximations of this optimal strategy. Our membership attacks outperform the state of the art in various settings, ranging from a simple logistic regression to more complex architectures and datasets, such as ResNet-101 and Imagenet.
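As a rough illustration of the black-box claim above (a sketch, not the paper's exact estimator), the coarsest approximation of the optimal strategy reduces to thresholding the per-sample loss: samples with low loss under the trained model are predicted to be training members. The function and threshold names below (`loss_threshold_attack`, `tau`) are hypothetical; in practice the threshold would be calibrated, e.g. on held-out or shadow data.

```python
import numpy as np

def loss_threshold_attack(losses, tau):
    """Illustrative loss-threshold membership inference: predict
    'member' when the per-sample loss falls below threshold tau.
    This is a coarse approximation of the Bayes optimal strategy,
    which the paper shows depends only on the loss function."""
    losses = np.asarray(losses)
    return losses < tau  # True -> predicted training-set member

# Toy example: training members typically have lower loss than non-members.
member_losses = np.array([0.05, 0.10, 0.20])
nonmember_losses = np.array([0.80, 1.20, 2.50])
tau = 0.5  # hypothetical threshold, assumed calibrated elsewhere
print(loss_threshold_attack(member_losses, tau))     # [ True  True  True]
print(loss_threshold_attack(nonmember_losses, tau))  # [False False False]
```

Because this attack uses only the loss value, it requires no access to the model's internals, which is consistent with the paper's point that black-box attacks can match white-box ones.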
Cite
Text
Sablayrolles et al. "White-Box vs Black-Box: Bayes Optimal Strategies for Membership Inference." International Conference on Machine Learning, 2019.

Markdown
[Sablayrolles et al. "White-Box vs Black-Box: Bayes Optimal Strategies for Membership Inference." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/sablayrolles2019icml-whitebox/)

BibTeX
@inproceedings{sablayrolles2019icml-whitebox,
title = {{White-Box vs Black-Box: Bayes Optimal Strategies for Membership Inference}},
author = {Sablayrolles, Alexandre and Douze, Matthijs and Schmid, Cordelia and Ollivier, Yann and Jegou, Herve},
booktitle = {International Conference on Machine Learning},
year = {2019},
pages = {5558-5567},
volume = {97},
url = {https://mlanthology.org/icml/2019/sablayrolles2019icml-whitebox/}
}