A Pretty Fast Algorithm for Adaptive Private Mean Estimation
Abstract
We design an $(\varepsilon, \delta)$-differentially private algorithm to estimate the mean of a $d$-variate distribution, with unknown covariance $\Sigma$, that is adaptive to $\Sigma$. To within polylogarithmic factors, the estimator achieves optimal rates of convergence with respect to the induced Mahalanobis norm $\|\cdot\|_\Sigma$, takes time $\tilde{O}(n d^2)$ to compute, has near linear sample complexity for sub-Gaussian distributions, allows $\Sigma$ to be degenerate or low rank, and adaptively extends beyond sub-Gaussianity. Prior to this work, other methods required exponential computation time or the superlinear scaling $n = \Omega(d^{3/2})$ to achieve non-trivial error with respect to the norm $\|\cdot\|_\Sigma$.
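As a point of reference for the error metric in the abstract (this snippet is illustrative and not part of the paper), the Mahalanobis norm of a vector $v$ with respect to a covariance $\Sigma$ is $\|v\|_\Sigma = \sqrt{v^\top \Sigma^{+} v}$; using the pseudoinverse $\Sigma^{+}$ rather than $\Sigma^{-1}$ accommodates the degenerate (low-rank) covariances the estimator allows:

```python
import numpy as np

def mahalanobis_norm(v: np.ndarray, sigma: np.ndarray) -> float:
    """Mahalanobis norm ||v||_Sigma = sqrt(v^T Sigma^+ v).

    The pseudoinverse handles degenerate (low-rank) Sigma: any
    component of v orthogonal to the range of Sigma contributes
    zero to the norm.
    """
    return float(np.sqrt(v @ np.linalg.pinv(sigma) @ v))

# Rank-1 (degenerate) covariance in 2 dimensions.
sigma = np.diag([4.0, 0.0])

# Error along the range of Sigma is scaled by the variance there.
print(mahalanobis_norm(np.array([2.0, 0.0]), sigma))  # 1.0

# Error orthogonal to the range of Sigma is not measured.
print(mahalanobis_norm(np.array([0.0, 1.0]), sigma))  # 0.0
```

Measuring error in $\|\cdot\|_\Sigma$ (rather than the Euclidean norm) is what makes the guarantee adaptive: directions in which the distribution varies more are allowed proportionally more estimation error.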
Cite
Text
Kuditipudi et al. "A Pretty Fast Algorithm for Adaptive Private Mean Estimation." Conference on Learning Theory, 2023.
Markdown
[Kuditipudi et al. "A Pretty Fast Algorithm for Adaptive Private Mean Estimation." Conference on Learning Theory, 2023.](https://mlanthology.org/colt/2023/kuditipudi2023colt-pretty/)
BibTeX
@inproceedings{kuditipudi2023colt-pretty,
title = {{A Pretty Fast Algorithm for Adaptive Private Mean Estimation}},
author = {Kuditipudi, Rohith and Duchi, John and Haque, Saminul},
booktitle = {Conference on Learning Theory},
year = {2023},
pages = {2511-2551},
volume = {195},
url = {https://mlanthology.org/colt/2023/kuditipudi2023colt-pretty/}
}