Differentially Private Covariance Revisited
Abstract
In this paper, we present two new algorithms for covariance estimation under concentrated differential privacy (zCDP). The first algorithm achieves a Frobenius error of $\tilde{O}(d^{1/4}\sqrt{\mathrm{tr}}/\sqrt{n} + \sqrt{d}/n)$, where $\mathrm{tr}$ is the trace of the covariance matrix. Taking $\mathrm{tr}=1$, this also implies a worst-case error bound of $\tilde{O}(d^{1/4}/\sqrt{n})$, which improves on the standard Gaussian mechanism's $\tilde{O}(d/n)$ in the regime $d>\widetilde{\Omega}(n^{2/3})$. Our second algorithm offers a tail-sensitive bound that can be much better on skewed data. Both algorithms are simple and efficient. Experimental results show that they offer significant improvements over prior work.
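As a point of reference for the $\tilde{O}(d/n)$ baseline mentioned above, the following is a minimal sketch of the standard Gaussian mechanism for covariance release under $\rho$-zCDP, not the paper's new algorithms. It assumes rows with $\|x_i\|_2 \le 1$ and replacement neighbors; the function name, the noise symmetrization, and the example usage are illustrative choices rather than details taken from the paper.

import numpy as np

def gaussian_covariance_zcdp(X, rho, rng=None):
    # Empirical second-moment matrix (1/n) * sum_i x_i x_i^T, released under
    # rho-zCDP via the Gaussian mechanism. Assumes each row satisfies
    # ||x_i||_2 <= 1, so replacing one row changes the matrix by at most
    # 2/n in Frobenius norm.
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    cov = (X.T @ X) / n

    sensitivity = 2.0 / n                     # Frobenius (L2) sensitivity, replacement neighbors
    sigma = sensitivity / np.sqrt(2.0 * rho)  # Gaussian mechanism noise scale for rho-zCDP

    # Add i.i.d. Gaussian noise to the upper triangle (incl. diagonal), then
    # mirror it so the released matrix stays symmetric.
    upper = np.triu(rng.normal(scale=sigma, size=(d, d)))
    noise = upper + np.triu(upper, k=1).T
    return cov + noise

# Example usage with synthetic data clipped to the unit ball (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 200))
X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)
noisy_cov = gaussian_covariance_zcdp(X, rho=0.5, rng=rng)

Since the noise matrix has $\Theta(d^2)$ independent entries of scale roughly $1/(n\sqrt{\rho})$, its Frobenius norm concentrates around $d/(n\sqrt{\rho})$, which is exactly the $\tilde{O}(d/n)$ baseline that the paper's first algorithm improves on once $d \gtrsim n^{2/3}$.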
Cite
Dong et al. "Differentially Private Covariance Revisited." Neural Information Processing Systems, 2022.
@inproceedings{dong2022neurips-differentially,
  title     = {{Differentially Private Covariance Revisited}},
  author    = {Dong, Wei and Liang, Yuting and Yi, Ke},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/dong2022neurips-differentially/}
}