Locally Private Gaussian Estimation

Abstract

We study a basic private estimation problem: each of n users draws a single i.i.d. sample from an unknown Gaussian distribution N(\mu,\sigma^2), and the goal is to estimate \mu while guaranteeing local differential privacy for each user. As minimizing the number of rounds of interaction is important in the local setting, we provide adaptive two-round solutions and nonadaptive one-round solutions to this problem. We match these upper bounds with an information-theoretic lower bound showing that our accuracy guarantees are tight up to logarithmic factors for all sequentially interactive locally private protocols.
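The one-round, nonadaptive setting described above can be illustrated with a generic locally private mean estimator: each user clamps their sample to a bounded range and adds Laplace noise calibrated to that range before sending it, and the server simply averages the noisy reports. This is a minimal sketch of the standard Laplace mechanism for LDP mean estimation, not the paper's actual protocol; the clamp bound `B` and privacy parameter `eps` are illustrative assumptions.

```python
import random


def ldp_release(x: float, eps: float, B: float) -> float:
    """One-round eps-LDP report: clamp x to [-B, B], add Laplace noise.

    The clamped value has sensitivity 2B, so Laplace noise with
    scale 2B/eps gives eps-local differential privacy.
    """
    x = max(-B, min(B, x))
    scale = 2 * B / eps
    # Laplace(0, scale) sampled as the difference of two exponentials.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return x + noise


def estimate_mean(samples, eps: float, B: float) -> float:
    """Server-side aggregation: average the noisy one-round reports."""
    reports = [ldp_release(x, eps, B) for x in samples]
    return sum(reports) / len(reports)


if __name__ == "__main__":
    random.seed(0)
    n, mu, sigma = 20000, 1.0, 1.0
    data = [random.gauss(mu, sigma) for _ in range(n)]
    print(estimate_mean(data, eps=1.0, B=5.0))
```

Note that the error of this naive estimator scales with the clamp bound `B`; the paper's protocols improve on this by locating the mean more carefully (adaptively in two rounds, or nonadaptively in one).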

Cite

Text

Joseph et al. "Locally Private Gaussian Estimation." Neural Information Processing Systems, 2019.

Markdown

[Joseph et al. "Locally Private Gaussian Estimation." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/joseph2019neurips-locally/)

BibTeX

@inproceedings{joseph2019neurips-locally,
  title     = {{Locally Private Gaussian Estimation}},
  author    = {Joseph, Matthew and Kulkarni, Janardhan and Mao, Jieming and Wu, Steven Z.},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {2984--2993},
  url       = {https://mlanthology.org/neurips/2019/joseph2019neurips-locally/}
}