Privately Learning High-Dimensional Distributions
Abstract
We present novel, computationally efficient, and differentially private algorithms for two fundamental high-dimensional learning problems: learning a multivariate Gaussian and learning a product distribution over the Boolean hypercube in total variation distance. The sample complexity of our algorithms nearly matches the sample complexity of the optimal non-private learners for these tasks in a wide range of parameters, showing that privacy comes essentially for free for these problems. In particular, in contrast to previous approaches, our algorithm for learning Gaussians does not require strong a priori bounds on the range of the parameters. Our algorithms introduce a novel technical approach to reducing the sensitivity of the estimation procedure that we call recursive private preconditioning.
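To make the privacy setting concrete, here is a minimal, illustrative sketch of the standard clip-and-noise baseline for privately estimating a Gaussian mean in one dimension. This is an assumption-laden toy (the function name, clipping bound, and Laplace calibration are ours), not the paper's recursive private preconditioning; it illustrates why a priori range bounds are needed by naive approaches, which the paper's algorithm avoids.

```python
import numpy as np

def private_mean(samples, clip_bound, epsilon, rng=None):
    """Estimate the mean of 1-D samples with (epsilon, 0)-differential privacy.

    Illustrative baseline only, NOT the paper's algorithm: clips each sample
    to [-clip_bound, clip_bound] so that swapping one sample changes the
    empirical mean by at most 2 * clip_bound / n, then adds Laplace noise
    calibrated to that sensitivity. Note the explicit dependence on an a
    priori bound (clip_bound) on the parameter range.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.clip(np.asarray(samples, dtype=float), -clip_bound, clip_bound)
    n = x.size
    sensitivity = 2.0 * clip_bound / n  # L1 sensitivity of the clipped mean
    noise = rng.laplace(scale=sensitivity / epsilon)
    return x.mean() + noise
```

With many samples the added noise is negligible, but the clipping bound must be guessed in advance; a loose bound inflates the noise, which is the kind of dependence the paper's approach removes.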
Cite
Text
Kamath et al. "Privately Learning High-Dimensional Distributions." Conference on Learning Theory, 2019.
Markdown
[Kamath et al. "Privately Learning High-Dimensional Distributions." Conference on Learning Theory, 2019.](https://mlanthology.org/colt/2019/kamath2019colt-privately/)
BibTeX
@inproceedings{kamath2019colt-privately,
title = {{Privately Learning High-Dimensional Distributions}},
author = {Kamath, Gautam and Li, Jerry and Singhal, Vikrant and Ullman, Jonathan},
booktitle = {Conference on Learning Theory},
year = {2019},
pages = {1853--1902},
volume = {99},
url = {https://mlanthology.org/colt/2019/kamath2019colt-privately/}
}