Beyond the Calibration Point: Mechanism Comparison in Differential Privacy
Abstract
In differentially private (DP) machine learning, the privacy guarantees of DP mechanisms are often reported and compared on the basis of a single $(\varepsilon, \delta)$-pair. This practice overlooks that DP guarantees can vary substantially even between mechanisms sharing a given $(\varepsilon, \delta)$, and potentially introduces privacy vulnerabilities that can remain undetected. This motivates the need for robust, rigorous methods for comparing DP guarantees in such cases. Here, we introduce the $\Delta$-divergence between mechanisms, which quantifies the worst-case excess privacy vulnerability of choosing one mechanism over another in terms of $(\varepsilon, \delta)$, $f$-DP, and a newly presented Bayesian interpretation. Moreover, as a generalisation of the Blackwell theorem, it is endowed with strong decision-theoretic foundations. Through application examples, we show that our techniques can facilitate informed decision-making and reveal gaps in the current understanding of privacy risks, as current practices in DP-SGD often result in choosing mechanisms with high excess privacy vulnerabilities.
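To make the abstract's point concrete, below is a minimal numerical sketch (not the authors' implementation; the exact definition of the $\Delta$-divergence is given in the paper). It calibrates a Gaussian and a Laplace mechanism to the same illustrative point $(\varepsilon^*, \delta^*) = (1, 10^{-5})$ using their standard closed-form privacy profiles, then reports the worst-case vertical gap between the two profiles in each direction as a simple proxy for excess privacy vulnerability. The calibration point and grid range are assumptions made for illustration.

```python
# Minimal numerical sketch (not the authors' implementation): two mechanisms
# calibrated to the same (eps*, delta*) can differ sharply elsewhere on their
# privacy profiles delta(eps). The worst-case vertical gap between profiles is
# used here as a simple proxy for "excess privacy vulnerability"; the paper's
# Delta-divergence is defined precisely in the text.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def delta_gaussian(eps, sigma, sens=1.0):
    # Tight privacy profile of the Gaussian mechanism (Balle & Wang, 2018).
    a = sens / (2.0 * sigma)
    b = eps * sigma / sens
    return norm.cdf(a - b) - np.exp(eps) * norm.cdf(-a - b)

def delta_laplace(eps, eps0):
    # Privacy profile of the Laplace mechanism with pure-DP parameter eps0.
    return np.maximum(0.0, 1.0 - np.exp((eps - eps0) / 2.0))

# Calibrate both mechanisms to the same point (illustrative choice).
eps_star, delta_star = 1.0, 1e-5
sigma = brentq(lambda s: delta_gaussian(eps_star, s) - delta_star, 0.1, 50.0)
eps0 = eps_star - 2.0 * np.log(1.0 - delta_star)  # inverts the Laplace profile

# Scan the whole profile: the curves meet at eps* but diverge elsewhere.
grid = np.linspace(0.0, 3.0, 3001)
excess_g = np.max(delta_gaussian(grid, sigma) - delta_laplace(grid, eps0))
excess_l = np.max(delta_laplace(grid, eps0) - delta_gaussian(grid, sigma))
print(f"calibrated sigma = {sigma:.3f}, Laplace eps0 = {eps0:.6f}")
print(f"worst-case excess delta, Gaussian over Laplace: {excess_g:.3g}")
print(f"worst-case excess delta, Laplace over Gaussian: {excess_l:.3g}")
```

In this assumed configuration the two profiles agree at the calibration point, yet the Laplace profile exceeds the Gaussian one by roughly $0.3$ at small $\varepsilon$ while the reverse excess stays on the order of $10^{-5}$: the curves cross, so neither mechanism uniformly dominates the other, and a single reported $(\varepsilon, \delta)$-pair conceals the asymmetry.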
Cite
Text
Kaissis et al. "Beyond the Calibration Point: Mechanism Comparison in Differential Privacy." International Conference on Machine Learning, 2024.
Markdown
[Kaissis et al. "Beyond the Calibration Point: Mechanism Comparison in Differential Privacy." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/kaissis2024icml-beyond/)
BibTeX
@inproceedings{kaissis2024icml-beyond,
  title     = {{Beyond the Calibration Point: Mechanism Comparison in Differential Privacy}},
  author    = {Kaissis, Georgios and Kolek, Stefan and Balle, Borja and Hayes, Jamie and Rueckert, Daniel},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {22840--22860},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/kaissis2024icml-beyond/}
}