Antipodes of Label Differential Privacy: PATE and ALIBI

Abstract

We consider the privacy-preserving machine learning (ML) setting where the trained model must satisfy differential privacy (DP) with respect to the labels of the training examples. We propose two novel approaches based on, respectively, the Laplace mechanism and the PATE framework, and demonstrate their effectiveness on standard benchmarks. While recent work by Ghazi et al. proposed Label DP schemes based on a randomized response mechanism, we argue that additive Laplace noise coupled with Bayesian inference (ALIBI) is a better fit for typical ML tasks. Moreover, we show how to achieve very strong privacy levels in some regimes, with our adaptation of the PATE framework that builds on recent advances in semi-supervised learning. We complement theoretical analysis of our algorithms' privacy guarantees with empirical evaluation of their memorization properties. Our evaluation suggests that comparing different algorithms according to their provable DP guarantees can be misleading and favor a less private algorithm with a tighter analysis. Code for implementation of algorithms and memorization attacks is available from https://github.com/facebookresearch/labeldpantipodes.
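To make the two mechanisms named in the abstract concrete, below is a minimal Python sketch, not the authors' implementation (that is in the linked repository). The function names, the uniform prior in the Bayesian step, and the use of classic noisy-max aggregation for PATE (rather than the paper's specific adaptation) are illustrative assumptions.

```python
import numpy as np


def noisy_one_hot(label: int, num_classes: int, epsilon: float, rng=None):
    """Release a label under epsilon-label-DP via the Laplace mechanism.

    Changing one example's label moves its one-hot encoding by L1
    distance 2, so Laplace noise of scale 2/epsilon suffices.
    """
    rng = rng or np.random.default_rng()
    one_hot = np.zeros(num_classes)
    one_hot[label] = 1.0
    return one_hot + rng.laplace(scale=2.0 / epsilon, size=num_classes)


def bayes_posterior(noisy: np.ndarray, epsilon: float) -> np.ndarray:
    """Bayesian post-processing of the noisy vector (free under DP).

    Under a uniform prior, P(true label = k | noisy vector z) is
    proportional to exp((|z_k| - |z_k - 1|) / b) with b = 2/epsilon,
    since only the k-th coordinate's Laplace likelihood differs
    across the candidate labels.
    """
    b = 2.0 / epsilon
    scores = (np.abs(noisy) - np.abs(noisy - 1.0)) / b
    scores -= scores.max()  # for numerical stability
    post = np.exp(scores)
    return post / post.sum()


def pate_noisy_argmax(teacher_votes, num_classes: int, epsilon: float, rng=None):
    """Classic PATE noisy-max aggregation, shown for contrast.

    One training example influences at most one teacher, so the vote
    histogram has L1 sensitivity 2; Laplace noise of scale 2/epsilon
    gives epsilon-DP per answered query.
    """
    rng = rng or np.random.default_rng()
    counts = np.bincount(np.asarray(teacher_votes), minlength=num_classes).astype(float)
    counts += rng.laplace(scale=2.0 / epsilon, size=num_classes)
    return int(np.argmax(counts))


# Example: privatize a label, then recover a soft label for training.
z = noisy_one_hot(label=3, num_classes=10, epsilon=1.0)
soft_label = bayes_posterior(z, epsilon=1.0)
```

The posterior returned by `bayes_posterior` is a soft label that can be fed to a soft-label loss; because it is pure post-processing of the noisy release, it consumes no additional privacy budget.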

Cite

Text

Esmaeili et al. "Antipodes of Label Differential Privacy: PATE and ALIBI." Neural Information Processing Systems, 2021.

Markdown

[Esmaeili et al. "Antipodes of Label Differential Privacy: PATE and ALIBI." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/esmaeili2021neurips-antipodes/)

BibTeX

@inproceedings{esmaeili2021neurips-antipodes,
  title     = {{Antipodes of Label Differential Privacy: PATE and ALIBI}},
  author    = {Esmaeili, Mani Malek and Mironov, Ilya and Prasad, Karthik and Shilov, Igor and Tramer, Florian},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/esmaeili2021neurips-antipodes/}
}