Differentially Private Sampling from Distributions

Abstract

We initiate an investigation of private sampling from distributions. Given a dataset with $n$ independent observations from an unknown distribution $P$, a sampling algorithm must output a single observation from a distribution that is close in total variation distance to $P$ while satisfying differential privacy. Sampling abstracts the goal of generating small amounts of realistic-looking data. We provide tight upper and lower bounds on the dataset size needed for this task for three natural families of distributions: arbitrary distributions on $\{1,\ldots ,k\}$, arbitrary product distributions on $\{0,1\}^d$, and product distributions on $\{0,1\}^d$ with bias in each coordinate bounded away from 0 and 1. We demonstrate that, in some parameter regimes, private sampling requires asymptotically fewer observations than learning a description of $P$ non-privately; in other regimes, however, private sampling proves to be as difficult as private learning. Notably, for some classes of distributions, the overhead in the number of observations needed for private learning compared to non-private learning is completely captured by the number of observations needed for private sampling.
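To make the task concrete (this sketch is not from the paper): a simple baseline for private sampling over $\{1,\ldots ,k\}$ privatizes the empirical histogram with Laplace noise and draws one observation from the normalized result. Since sampling from the noisy histogram is post-processing, the output inherits the histogram's $\varepsilon$-differential privacy guarantee. The sketch below assumes NumPy, a replacement notion of neighboring datasets (so the count vector has L1 sensitivity 2), and illustrative names such as dp_sample.

import numpy as np

def dp_sample(data, k, epsilon, rng=None):
    # Privatize the empirical histogram: replacing one record changes the
    # count vector by at most 2 in L1, so Laplace noise with scale
    # 2/epsilon yields an epsilon-DP histogram.
    rng = np.random.default_rng() if rng is None else rng
    counts = np.bincount(np.asarray(data), minlength=k).astype(float)
    noisy = counts + rng.laplace(scale=2.0 / epsilon, size=k)
    # Post-processing (clipping, normalizing, sampling) preserves DP.
    probs = np.clip(noisy, 0.0, None)
    if probs.sum() == 0.0:
        probs = np.ones(k)  # degenerate case: fall back to uniform
    probs /= probs.sum()
    return int(rng.choice(k, p=probs))

# Example: one private draw from a dataset of 1000 observations on {0, 1, 2}.
rng = np.random.default_rng(0)
data = rng.choice(3, size=1000, p=[0.5, 0.3, 0.2])
print(dp_sample(data, k=3, epsilon=1.0, rng=rng))

This baseline's output distribution approaches $P$ only as $n$ grows; the paper's contribution is pinning down the optimal dependence of $n$ on the parameters for the three families above.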

Cite

Text

Raskhodnikova et al. "Differentially Private Sampling from Distributions." Neural Information Processing Systems, 2021.

Markdown

[Raskhodnikova et al. "Differentially Private Sampling from Distributions." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/raskhodnikova2021neurips-differentially/)

BibTeX

@inproceedings{raskhodnikova2021neurips-differentially,
  title     = {{Differentially Private Sampling from Distributions}},
  author    = {Raskhodnikova, Sofya and Sivakumar, Satchit and Smith, Adam and Swanberg, Marika},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/raskhodnikova2021neurips-differentially/}
}