Unleashing the Power of Randomization in Auditing Differentially Private ML
Abstract
We present a rigorous methodology for auditing differentially private machine learning algorithms by adding multiple carefully designed examples called canaries. We take a first-principles approach based on three key components. First, we introduce Lifted Differential Privacy (Lifted DP), which expands the definition of differential privacy to handle randomized datasets. This gives us the freedom to design randomized canaries. Second, we audit Lifted DP by trying to distinguish between the model trained with $K$ canaries versus $K-1$ canaries in the dataset, leaving one canary out. By drawing the canaries i.i.d., Lifted DP can leverage the symmetry in the design and reuse each privately trained model to run multiple statistical tests, one for each canary. Third, we introduce novel confidence intervals that take advantage of the multiple test statistics by adapting to the empirical higher-order correlations. Together, this new recipe demonstrates significant improvements in sample complexity, both theoretically and empirically, using synthetic and real data. Further, recent advances in designing stronger canaries can be readily incorporated into the new framework.
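The recipe above can be illustrated with a toy simulation. This is a minimal sketch, not the paper's actual estimator: it assumes a hypothetical mechanism whose per-canary scores are Gaussian, runs one membership test per i.i.d. canary, and uses plain Wilson score intervals (the paper's intervals additionally adapt to empirical higher-order correlations among the test statistics).

```python
import numpy as np

rng = np.random.default_rng(0)

K = 1000      # number of i.i.d. canaries (hypothetical choice)
sigma = 1.0   # noise scale of the toy "private" mechanism

# Toy mechanism: a canary's score is shifted by +1 when it was included
# in training, plus Gaussian noise; left-out canaries get noise only.
scores_in = 1.0 + sigma * rng.standard_normal(K)   # K canaries included
scores_out = sigma * rng.standard_normal(K)        # the left-out case

# One membership test per canary: predict "included" if score > tau.
tau = 0.5
tpr = np.mean(scores_in > tau)   # true positive rate of the attack
fpr = np.mean(scores_out > tau)  # false positive rate of the attack

def wilson(p_hat, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = z * np.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

tpr_lo, _ = wilson(tpr, K)   # conservative lower bound on TPR
_, fpr_hi = wilson(fpr, K)   # conservative upper bound on FPR

# Standard empirical lower bound on epsilon from the two rates.
eps_lb = np.log(max(tpr_lo, 1e-12) / max(fpr_hi, 1e-12))
print(f"TPR={tpr:.3f}, FPR={fpr:.3f}, eps lower bound={eps_lb:.3f}")
```

Reusing one trained model for $K$ simultaneous tests is what shrinks the interval widths here (both scale as $1/\sqrt{K}$), which is the sample-complexity gain the abstract refers to.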
Cite
Text
Pillutla et al. "Unleashing the Power of Randomization in Auditing Differentially Private ML." ICML 2023 Workshops: FL, 2023.
Markdown
[Pillutla et al. "Unleashing the Power of Randomization in Auditing Differentially Private ML." ICML 2023 Workshops: FL, 2023.](https://mlanthology.org/icmlw/2023/pillutla2023icmlw-unleashing/)
BibTeX
@inproceedings{pillutla2023icmlw-unleashing,
title = {{Unleashing the Power of Randomization in Auditing Differentially Private ML}},
author = {Pillutla, Krishna and Andrew, Galen and Kairouz, Peter and McMahan, Hugh Brendan and Oprea, Alina and Oh, Sewoong},
booktitle = {ICML 2023 Workshops: FL},
year = {2023},
url = {https://mlanthology.org/icmlw/2023/pillutla2023icmlw-unleashing/}
}