Semantic Membership Inference Attack Against Large Language Models
Abstract
Membership Inference Attacks (MIAs) determine whether a specific data point was included in the training set of a target model. In this paper, we introduce the Semantic Membership Inference Attack (SMIA), a novel approach that enhances MIA performance by leveraging the semantic content of inputs and their perturbations. SMIA trains a neural network to analyze the target model’s behavior on perturbed inputs, effectively capturing variations in output probability distributions between members and non-members. We conduct comprehensive evaluations on the Pythia and GPT-Neo model families using the Wikipedia and MIMIR datasets. Our results show that SMIA significantly outperforms existing MIAs; for instance, for Wikipedia, SMIA achieves an AUC-ROC of 67.39% on Pythia-12B, compared to 58.90% by the second-best attack.
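The core idea in the abstract can be illustrated with a minimal sketch: score a candidate text and several semantically perturbed variants under the target model, then summarize how the score shifts. Here `target_log_prob` and `perturb` are hypothetical stand-ins (the paper uses a real LLM's token probabilities and an auxiliary model to generate semantic perturbations, and trains a neural network on the resulting signals rather than hand-crafted statistics).

```python
import random

def target_log_prob(text):
    # Hypothetical stand-in for the target LLM's average token
    # log-probability of `text`; deterministic for illustration.
    rng = random.Random(sum(map(ord, text)))
    return -2.0 + rng.random()

def perturb(text, n=4):
    # Hypothetical stand-in for semantics-preserving perturbations
    # (SMIA generates these with an auxiliary model; here we just
    # uppercase one word per variant as a placeholder).
    words = text.split()
    variants = []
    for i in range(n):
        w = list(words)
        if w:
            w[i % len(w)] = w[i % len(w)].upper()
        variants.append(" ".join(w))
    return variants

def smia_features(text):
    # Compare the original score to scores of perturbed variants;
    # members and non-members are expected to shift differently.
    base = target_log_prob(text)
    deltas = [target_log_prob(p) - base for p in perturb(text)]
    mean = sum(deltas) / len(deltas)
    var = sum((d - mean) ** 2 for d in deltas) / len(deltas)
    # In SMIA these signals feed a trained membership classifier;
    # we return summary statistics for illustration.
    return [base, mean, var]

feats = smia_features("the quick brown fox jumps over the lazy dog")
print(len(feats))  # → 3
```

A membership classifier would then be trained on such feature vectors from known members and non-members, replacing the hand-picked statistics above with learned features.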
Cite
Text
Mozaffari and Marathe. "Semantic Membership Inference Attack Against Large Language Models." NeurIPS 2024 Workshops: SafeGenAi, 2024.
Markdown
[Mozaffari and Marathe. "Semantic Membership Inference Attack Against Large Language Models." NeurIPS 2024 Workshops: SafeGenAi, 2024.](https://mlanthology.org/neuripsw/2024/mozaffari2024neuripsw-semantic-a/)
BibTeX
@inproceedings{mozaffari2024neuripsw-semantic-a,
title = {{Semantic Membership Inference Attack Against Large Language Models}},
author = {Mozaffari, Hamid and Marathe, Virendra},
booktitle = {NeurIPS 2024 Workshops: SafeGenAi},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/mozaffari2024neuripsw-semantic-a/}
}