Priv’IT: Private and Sample Efficient Identity Testing

Abstract

We develop differentially private hypothesis testing methods for the small sample regime. Given a sample $\mathcal{D}$ from a categorical distribution $p$ over some domain $\Sigma$, an explicitly described distribution $q$ over $\Sigma$, some privacy parameter $\epsilon$, accuracy parameter $\alpha$, and requirements $\beta_\mathrm{I}$ and $\beta_\mathrm{II}$ for the type I and type II errors of our test, the goal is to distinguish between $p=q$ and $d_\mathrm{tv}(p,q) \ge \alpha$. We provide theoretical bounds for the sample size $|\mathcal{D}|$ so that our method both satisfies $(\epsilon,0)$-differential privacy, and guarantees $\beta_\mathrm{I}$ and $\beta_\mathrm{II}$ type I and type II errors. We show that differential privacy may come for free in some regimes of parameters, and we always beat the sample complexity resulting from running the $\chi^2$-test with noisy counts, or standard approaches such as repetition for endowing non-private $\chi^2$-style statistics with differential privacy guarantees. We experimentally compare the sample complexity of our method to that of recently proposed methods for private hypothesis testing.
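As a point of reference for readers, the baseline the abstract compares against (running a $\chi^2$-style identity test on noisy counts) can be sketched as follows. This is not the paper's Priv'IT algorithm; it is an illustrative implementation of the noisy-counts approach, with a conservative Laplace noise scale of $2/\epsilon$ (one sample changes at most two histogram counts by 1 each) and a standard identity-testing statistic whose mean is roughly zero when $p = q$:

```python
import numpy as np

def noisy_chi2_statistic(samples, q, epsilon, rng):
    """Chi^2-style identity-testing statistic computed on Laplace-noised
    counts. `samples` are draws from the unknown p over {0, ..., len(q)-1};
    `q` is the explicit reference distribution. The statistic concentrates
    near 0 under p = q and grows large when p is far from q in TV distance.
    """
    n = len(samples)
    counts = np.bincount(samples, minlength=len(q)).astype(float)
    # Laplace noise with scale 2/epsilon on each count: a conservative
    # calibration for (epsilon, 0)-differential privacy, since changing
    # one sample alters at most two counts, each by at most 1.
    noisy = counts + rng.laplace(scale=2.0 / epsilon, size=len(q))
    expected = n * q
    # Subtracting the (noisy) count in the numerator debiases the usual
    # Pearson chi^2 term, so the statistic has mean ~0 under the null.
    return float(np.sum(((noisy - expected) ** 2 - noisy) / expected))

rng = np.random.default_rng(0)
q = np.ones(10) / 10
null_samples = rng.integers(0, 10, size=5000)          # p = q (uniform)
far_samples = rng.choice(10, size=5000, p=[0.55] + [0.05] * 9)  # p far from q
z_null = noisy_chi2_statistic(null_samples, q, epsilon=1.0, rng=rng)
z_far = noisy_chi2_statistic(far_samples, q, epsilon=1.0, rng=rng)
```

A full test would compare the statistic to a threshold chosen from $\alpha$, $\beta_\mathrm{I}$, and $\beta_\mathrm{II}$; the paper's contribution is that its method needs fewer samples than this noisy-counts baseline to meet the same error guarantees.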

Cite

Text

Cai et al. "Priv’IT: Private and Sample Efficient Identity Testing." International Conference on Machine Learning, 2017.

Markdown

[Cai et al. "Priv’IT: Private and Sample Efficient Identity Testing." International Conference on Machine Learning, 2017.](https://mlanthology.org/icml/2017/cai2017icml-privit/)

BibTeX

@inproceedings{cai2017icml-privit,
  title     = {{Priv'IT: Private and Sample Efficient Identity Testing}},
  author    = {Cai, Bryan and Daskalakis, Constantinos and Kamath, Gautam},
  booktitle = {International Conference on Machine Learning},
  year      = {2017},
  pages     = {635--644},
  volume    = {70},
  url       = {https://mlanthology.org/icml/2017/cai2017icml-privit/}
}