Sequentially Auditing Differential Privacy
Abstract
We propose a practical sequential test for auditing differential privacy guarantees of black-box mechanisms. The test processes a stream of mechanism outputs, providing anytime-valid inference while controlling the Type I error, and thereby overcomes the fixed-sample-size limitation of previous batch auditing methods. Across diverse realistic mechanisms, experiments show that the test detects violations with sample sizes orders of magnitude smaller than existing methods, reducing the number of required samples from 50K to a few hundred. Notably, it identifies DP-SGD privacy violations in under one training run, unlike prior methods that require fully trained models.
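To make the idea concrete, below is a minimal Python sketch of one way such a sequential audit can work; it is an illustration under stated assumptions, not the paper's exact procedure, and the names sequential_dp_audit, mechanism, and distinguisher are hypothetical. It casts auditing as a distinguishing game: each round, a secret fair coin picks one of two neighboring datasets, the black-box mechanism is queried once, and a distinguisher guesses the coin. If the mechanism is eps-DP, the per-round guessing accuracy is at most e^eps / (1 + e^eps), so a bettor wagering on accuracy above that bound has wealth forming a nonnegative supermartingale; by Ville's inequality, rejecting when the wealth exceeds 1/alpha controls the Type I error at level alpha at any stopping time.

import numpy as np

def sequential_dp_audit(mechanism, d0, d1, distinguisher, epsilon,
                        alpha=0.05, max_rounds=10_000, lam=0.5, seed=0):
    # Under eps-DP, per-round distinguishing accuracy is at most p_star.
    p_star = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    lam = min(lam, 0.99 / p_star)  # keep the per-round wealth factor positive
    rng = np.random.default_rng(seed)
    wealth = 1.0
    for t in range(1, max_rounds + 1):
        b = int(rng.integers(2))                  # secret bit: which dataset
        out = mechanism(d1 if b else d0)          # one black-box query
        correct = float(distinguisher(out) == b)  # 1 if the guess was right
        wealth *= 1.0 + lam * (correct - p_star)  # bet against the DP bound
        if wealth >= 1.0 / alpha:                 # Ville: P(ever) <= alpha under H0
            return t, True                        # violation detected after t samples
    return max_rounds, False                      # no violation found at level alpha

# Hypothetical example: a counting query with sensitivity 1, claimed eps = 1
# but with Laplace noise calibrated to eps = 4; the claim should be rejected.
rng = np.random.default_rng(1)
mech = lambda d: sum(d) + rng.laplace(scale=1.0 / 4.0)
d0, d1 = [0] * 10, [0] * 9 + [1]                  # neighboring datasets
guess = lambda out: int(out > 0.5)                # threshold distinguisher
print(sequential_dp_audit(mech, d0, d1, guess, epsilon=1.0))

Because the wealth process is anytime-valid, the audit can stop the moment the threshold is crossed, which is what lets violations surface after a few hundred queries instead of a fixed 50K-sample batch.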
Cite
Text
González et al. "Sequentially Auditing Differential Privacy." Advances in Neural Information Processing Systems, 2025.

Markdown
[González et al. "Sequentially Auditing Differential Privacy." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/gonzalez2025neurips-sequentially/)

BibTeX
@inproceedings{gonzalez2025neurips-sequentially,
title = {{Sequentially Auditing Differential Privacy}},
author = {González, Tomás and Rubio, Mateo Dulce and Ramdas, Aaditya and Ribero, Mónica},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/gonzalez2025neurips-sequentially/}
}