Anonymized Histograms in Intermediate Privacy Models
Abstract
We study the problem of privately computing the $\mbox{\it anonymized histogram}$ (a.k.a. $\mbox{\it unattributed histogram}$), which is defined as the histogram without item labels. Previous works have provided algorithms with $\ell_1$- and $\ell_2^2$-errors of $O_\varepsilon(\sqrt{n})$ in the central model of differential privacy (DP). In this work, we provide an algorithm with a nearly matching error guarantee of $\widetilde{O}_\varepsilon(\sqrt{n})$ in the shuffle DP and pan-private models. Our algorithm is very simple: it just post-processes the discrete Laplace-noised histogram! Using this algorithm as a subroutine, we show applications in privately estimating symmetric properties of distributions such as entropy, support coverage, and support size.
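To make the high-level recipe concrete, here is a minimal Python sketch of the pipeline the abstract describes: add discrete Laplace noise to each count of the labeled histogram, then post-process to obtain an anonymized (label-free) histogram estimate. The post-processing shown here (clipping at zero and sorting the counts) is only an illustrative placeholder, not the paper's actual estimator, and the function names are hypothetical.

```python
import numpy as np

def discrete_laplace(scale: float, size: int, rng: np.random.Generator) -> np.ndarray:
    """Sample discrete Laplace noise: the difference of two i.i.d. geometric
    variables with success probability p = 1 - exp(-1/scale) is distributed
    proportionally to exp(-|k|/scale)."""
    p = 1.0 - np.exp(-1.0 / scale)
    return rng.geometric(p, size) - rng.geometric(p, size)

def noisy_anonymized_histogram(counts: np.ndarray, epsilon: float, seed: int = 0) -> np.ndarray:
    """Noise each item's count with discrete Laplace(1/epsilon), then post-process.

    Clipping at zero and sorting in non-increasing order is a simple placeholder
    for the paper's post-processing step (an assumption, not the paper's method)."""
    rng = np.random.default_rng(seed)
    noisy = counts + discrete_laplace(scale=1.0 / epsilon, size=len(counts), rng=rng)
    return np.sort(np.clip(noisy, 0, None))[::-1]

# Example: a labeled histogram over 5 items; the output discards item identities.
counts = np.array([10, 7, 7, 3, 1])
print(noisy_anonymized_histogram(counts, epsilon=1.0))
```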
Cite
Text
Ghazi et al. "Anonymized Histograms in Intermediate Privacy Models." Neural Information Processing Systems, 2022.

Markdown

[Ghazi et al. "Anonymized Histograms in Intermediate Privacy Models." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/ghazi2022neurips-anonymized/)

BibTeX
@inproceedings{ghazi2022neurips-anonymized,
title = {{Anonymized Histograms in Intermediate Privacy Models}},
author = {Ghazi, Badih and Kamath, Pritish and Kumar, Ravi and Manurangsi, Pasin},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/ghazi2022neurips-anonymized/}
}