A Private and Computationally-Efficient Estimator for Unbounded Gaussians
Abstract
We give the first polynomial-time, polynomial-sample, differentially private estimator for the mean and covariance of an arbitrary Gaussian distribution $N(\mu,\Sigma)$ in $\mathbb{R}^d$. All previous estimators are either nonconstructive, with unbounded running time, or require the user to specify a priori bounds on the parameters $\mu$ and $\Sigma$. The primary new technical tool in our algorithm is a new differentially private preconditioner that takes samples from an arbitrary Gaussian $N(0,\Sigma)$ and returns a matrix $A$ such that $A \Sigma A^T$ has constant condition number.
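To make the preconditioning goal concrete, here is a minimal, non-private sketch of the analogous empirical step: given mean-zero samples from $N(0,\Sigma)$, it returns $A = \hat{\Sigma}^{-1/2}$ so that $A \Sigma A^T$ is close to the identity and hence well-conditioned. This is only an illustration of what the preconditioner achieves; the paper's contribution is a differentially private version of this step, which this code does not implement.

```python
import numpy as np

def empirical_preconditioner(samples: np.ndarray) -> np.ndarray:
    """Non-private illustration: return A = Sigma_hat^{-1/2} from mean-zero samples.

    Then A @ Sigma @ A.T is approximately the identity, i.e. well-conditioned.
    The paper's algorithm produces such an A under differential privacy.
    """
    n, _ = samples.shape
    sigma_hat = samples.T @ samples / n          # empirical covariance (mean-zero data)
    eigvals, eigvecs = np.linalg.eigh(sigma_hat)  # symmetric eigendecomposition
    inv_sqrt = eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T
    return inv_sqrt

# Usage sketch: after preconditioning, the covariance of A @ x is near-identity,
# so an estimator that assumes a bounded condition number can be applied.
```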
Cite

Text
Kamath et al. "A Private and Computationally-Efficient Estimator for Unbounded Gaussians." Conference on Learning Theory, 2022.

Markdown
[Kamath et al. "A Private and Computationally-Efficient Estimator for Unbounded Gaussians." Conference on Learning Theory, 2022.](https://mlanthology.org/colt/2022/kamath2022colt-private/)

BibTeX
@inproceedings{kamath2022colt-private,
title = {{A Private and Computationally-Efficient Estimator for Unbounded Gaussians}},
author = {Kamath, Gautam and Mouzakis, Argyris and Singhal, Vikrant and Steinke, Thomas and Ullman, Jonathan},
booktitle = {Conference on Learning Theory},
year = {2022},
pages = {544--572},
volume = {178},
url = {https://mlanthology.org/colt/2022/kamath2022colt-private/}
}