Against Membership Inference Attack: Pruning Is All You Need
Abstract
Large model sizes, high computational cost, and vulnerability to membership inference attacks (MIAs) have impeded the adoption of deep learning and deep neural networks (DNNs), especially on mobile devices. To address these challenges, we envision that weight pruning can help defend DNNs against MIAs while reducing model storage and computation. In this work, we propose a pruning algorithm and show that it can find a subnetwork that prevents privacy leakage from MIAs while achieving accuracy competitive with the original DNN. We also verify our theoretical insights with experiments. Our experimental results show that the attack accuracy under model compression is up to 13.6% and 10% lower than that of the baseline and the Min-Max game, respectively.
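The abstract describes weight pruning as finding a sparse subnetwork. As a minimal sketch of the general technique (magnitude-based pruning; this is illustrative only, not the paper's proposed algorithm), the smallest-magnitude fraction of weights is zeroed out:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest magnitude.

    weights:  numpy array of layer weights
    sparsity: fraction in [0, 1) of entries to set to zero
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune half the weights of a tiny 2x2 layer
w = np.array([[0.1, -2.0], [0.05, 3.0]])
pruned = magnitude_prune(w, 0.5)
# The two smallest-magnitude entries (0.1 and 0.05) are zeroed;
# the large weights (-2.0 and 3.0) survive.
```

The resulting sparse mask reduces both storage and multiply-accumulate operations, which is the compression effect the abstract refers to.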
Cite
Text
Wang et al. "Against Membership Inference Attack: Pruning Is All You Need." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/432
Markdown
[Wang et al. "Against Membership Inference Attack: Pruning Is All You Need." International Joint Conference on Artificial Intelligence, 2021.](https://mlanthology.org/ijcai/2021/wang2021ijcai-against/) doi:10.24963/IJCAI.2021/432
BibTeX
@inproceedings{wang2021ijcai-against,
title = {{Against Membership Inference Attack: Pruning Is All You Need}},
author = {Wang, Yijue and Wang, Chenghong and Wang, Zigeng and Zhou, Shanglin and Liu, Hang and Bi, Jinbo and Ding, Caiwen and Rajasekaran, Sanguthevar},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2021},
pages = {3141-3147},
doi = {10.24963/IJCAI.2021/432},
url = {https://mlanthology.org/ijcai/2021/wang2021ijcai-against/}
}