Canonical Noise Distributions and Private Hypothesis Tests
Abstract
In the setting of $f$-DP, we propose the concept of a \emph{canonical noise distribution} (CND), which captures whether an additive privacy mechanism is tailored for a given $f$, and we give a construction of a CND for an arbitrary tradeoff function $f$. We show that private hypothesis tests are intimately related to CNDs, allowing for the release of private $p$-values at no additional privacy cost, as well as the construction of uniformly most powerful (UMP) tests for binary data. We apply our techniques to difference-of-proportions testing.
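The abstract assumes familiarity with tradeoff functions from the $f$-DP framework. As a minimal illustration (a sketch based on the standard Gaussian tradeoff function from the $f$-DP literature, not code from this paper), the function $G_\mu(\alpha) = \Phi(\Phi^{-1}(1-\alpha) - \mu)$ maps a type-I error level $\alpha$ to the smallest achievable type-II error when distinguishing neighboring datasets under $\mu$-Gaussian DP; the function name `gaussian_tradeoff` below is a hypothetical helper for illustration:

```python
# Hedged sketch: the Gaussian tradeoff function G_mu from the f-DP literature,
# G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu), where Phi is the standard
# normal CDF. This is illustrative background, not the paper's CND construction.
from statistics import NormalDist

_STD_NORMAL = NormalDist()  # standard normal, mean 0, sd 1

def gaussian_tradeoff(alpha: float, mu: float) -> float:
    """Smallest achievable type-II error at type-I level alpha under mu-GDP."""
    return _STD_NORMAL.cdf(_STD_NORMAL.inv_cdf(1.0 - alpha) - mu)

# mu = 0 corresponds to perfect privacy: G_0(alpha) = 1 - alpha.
print(gaussian_tradeoff(0.3, 0.0))  # equals 1 - alpha = 0.7 (up to float precision)
# Larger mu means the two datasets are easier to distinguish (weaker privacy),
# so the curve drops below 1 - alpha.
print(gaussian_tradeoff(0.3, 1.0))
```

A useful sanity check on this family is its symmetry, $G_\mu(G_\mu(\alpha)) = \alpha$, reflecting that swapping the roles of the two hypotheses leaves the Gaussian tradeoff curve unchanged.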
Cite
Text
Awan and Vadhan. "Canonical Noise Distributions and Private Hypothesis Tests." NeurIPS 2021 Workshops: PRIML, 2021.
Markdown
[Awan and Vadhan. "Canonical Noise Distributions and Private Hypothesis Tests." NeurIPS 2021 Workshops: PRIML, 2021.](https://mlanthology.org/neuripsw/2021/awan2021neuripsw-canonical/)
BibTeX
@inproceedings{awan2021neuripsw-canonical,
title = {{Canonical Noise Distributions and Private Hypothesis Tests}},
author = {Awan, Jordan and Vadhan, Salil},
booktitle = {NeurIPS 2021 Workshops: PRIML},
year = {2021},
url = {https://mlanthology.org/neuripsw/2021/awan2021neuripsw-canonical/}
}