Center-Based Relaxed Learning Against Membership Inference Attacks

Abstract

Membership inference attacks (MIAs) are currently considered one of the main privacy attack strategies, and defense mechanisms against them have been extensively explored. However, existing defense approaches still fall short of ideal models in both performance and deployment cost. In particular, we observe that a model's privacy vulnerability is closely correlated with the gap between its data-memorization ability and its generalization ability. To address this, we propose a new architecture-agnostic training paradigm called Center-based Relaxed Learning (CRL), which is adaptive to any classification model and provides privacy preservation at little or no cost to model generalizability. We emphasize that CRL can better maintain the model's consistency between member and non-member data. Through extensive experiments on common classification datasets, we empirically show that this approach exhibits comparable performance without requiring additional model capacity or data costs.
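To give a rough sense of what "center-based" training looks like, the sketch below computes a generic center loss: the mean squared distance from each sample's feature vector to its class center. This is an illustrative assumption, not the paper's exact CRL objective; the function name, toy features, and centers are all hypothetical.

```python
import numpy as np

def center_loss(features, labels, centers):
    """Generic center-based penalty (a sketch, not the paper's CRL):
    mean squared distance from each feature to its class center."""
    diffs = features - centers[labels]          # (N, D) per-sample offsets
    return np.mean(np.sum(diffs ** 2, axis=1))  # scalar penalty

# Hypothetical toy example: 2 classes with 2-D features
features = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = np.array([0, 1])
centers = np.array([[1.0, 0.0], [0.0, 0.0]])
loss = center_loss(features, labels, centers)  # only class 1 is off-center
```

In practice such a term would be added to the usual classification loss with a weighting coefficient, so that features cluster around class centers while the classifier is trained.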

Cite

Text

Fang and Kim. "Center-Based Relaxed Learning Against Membership Inference Attacks." Uncertainty in Artificial Intelligence, 2024.

Markdown

[Fang and Kim. "Center-Based Relaxed Learning Against Membership Inference Attacks." Uncertainty in Artificial Intelligence, 2024.](https://mlanthology.org/uai/2024/fang2024uai-centerbased/)

BibTeX

@inproceedings{fang2024uai-centerbased,
  title     = {{Center-Based Relaxed Learning Against Membership Inference Attacks}},
  author    = {Fang, Xingli and Kim, Jung-Eun},
  booktitle = {Uncertainty in Artificial Intelligence},
  year      = {2024},
  pages     = {1294--1306},
  volume    = {244},
  url       = {https://mlanthology.org/uai/2024/fang2024uai-centerbased/}
}