A Randomized Approach to Tight Privacy Accounting

Abstract

Bounding privacy leakage over compositions, i.e., privacy accounting, is a key challenge in differential privacy (DP). However, the privacy parameter ($\varepsilon$ or $\delta$) is often easy to estimate but hard to bound. In this paper, we propose a new differential privacy paradigm called estimate-verify-release (EVR), which addresses the challenge of providing a strict upper bound on the privacy parameter in DP compositions by converting an *estimate* of the privacy parameter into a formal guarantee. The EVR paradigm first verifies whether the mechanism meets the *estimated* privacy guarantee, and then releases the query output based on the verification result. The core component of EVR is privacy verification. We develop a randomized privacy verifier using Monte Carlo (MC) techniques. Furthermore, we propose an MC-based DP accountant that outperforms existing DP accounting techniques in terms of accuracy and efficiency. The MC-based DP verifier and accountant are applicable to an important and commonly used class of DP algorithms, including the widely used DP-SGD. An empirical evaluation shows that the proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
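To make the estimate-verify-release flow concrete, below is a minimal, hypothetical sketch for the k-fold composition of a Gaussian mechanism: a Monte Carlo estimator samples the total privacy loss random variable to estimate $\delta(\varepsilon)$, and the output is released only if the claimed guarantee is accepted. The function names, the simple slack-factor acceptance rule, and the parameter values are illustrative assumptions; the paper's verifier instead controls the verification failure probability rigorously.

```python
import numpy as np

def mc_delta_estimate(eps, sigma, k, n_samples=200_000, rng=None):
    """Monte Carlo estimate of delta(eps) for the k-fold composition of a
    Gaussian mechanism with sensitivity 1 and noise multiplier sigma.

    The total privacy loss random variable L is Gaussian with mean
    k / (2 * sigma^2) and variance k / sigma^2, and
    delta(eps) = E[max(0, 1 - exp(eps - L))].
    """
    rng = np.random.default_rng() if rng is None else rng
    mean, std = k / (2.0 * sigma**2), np.sqrt(k) / sigma
    losses = rng.normal(mean, std, size=n_samples)
    # Clipping the exponent at 0 leaves the max(0, .) result unchanged
    # and avoids overflow for very negative loss samples.
    return np.maximum(0.0, 1.0 - np.exp(np.minimum(eps - losses, 0.0))).mean()

def evr_release(query_output, eps_claimed, delta_claimed, sigma, k,
                slack=1.2, rng=None):
    """Estimate-verify-release sketch: release the output only if the MC
    verifier accepts the claimed (eps, delta); otherwise withhold it.

    `slack` is a placeholder safety margin, not the paper's calibrated rule.
    """
    delta_hat = mc_delta_estimate(eps_claimed, sigma, k, rng=rng)
    if slack * delta_hat <= delta_claimed:  # verification passed
        return query_output
    return None                             # verification failed

# Example: verify a claimed (1.0, 1e-2)-DP guarantee for 100 compositions
# with per-step noise multiplier sigma = 20 (illustrative values only).
out = evr_release(query_output=42.0, eps_claimed=1.0, delta_claimed=1e-2,
                  sigma=20.0, k=100)
print("released" if out is not None else "rejected")
```

In this sketch the verifier reuses the MC accountant's estimate; the paradigm described in the abstract separates the two roles, with the accountant producing the estimated guarantee and the verifier certifying it before release.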

Cite

Text

Wang et al. "A Randomized Approach to Tight Privacy Accounting." Neural Information Processing Systems, 2023.

Markdown

[Wang et al. "A Randomized Approach to Tight Privacy Accounting." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/wang2023neurips-randomized/)

BibTeX

@inproceedings{wang2023neurips-randomized,
  title     = {{A Randomized Approach to Tight Privacy Accounting}},
  author    = {Wang, Jiachen and Mahloujifar, Saeed and Wu, Tong and Jia, Ruoxi and Mittal, Prateek},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/wang2023neurips-randomized/}
}