On Learning and Refutation in Noninteractive Local Differential Privacy

Abstract

We study two basic statistical tasks in non-interactive local differential privacy (LDP): *learning* and *refutation*. Learning requires finding a concept that best fits an unknown target function (from labelled samples drawn from a distribution), whereas refutation requires distinguishing between data distributions that are well correlated with some concept in the class and distributions where the labels are random. Our main result is a complete characterization of the sample complexity of agnostic PAC learning for non-interactive LDP protocols. We show that the optimal sample complexity for any concept class is captured by the approximate $\gamma_2$ norm of a natural matrix associated with the class. Combined with previous work, this gives an *equivalence* between learning and refutation in the agnostic setting.
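For readers unfamiliar with the quantity mentioned above, the following is a sketch of the standard definitions of the $\gamma_2$ (factorization) norm and its approximate variant; the exact notation and parameterization used in the paper may differ.

$$
\gamma_2(A) \;=\; \min_{A = BC}\;\Big(\max_i \|b_i\|_2\Big)\Big(\max_j \|c_j\|_2\Big),
$$

where the minimum is over all factorizations $A = BC$, with $b_i$ the rows of $B$ and $c_j$ the columns of $C$. For an error parameter $\alpha > 0$, the $\alpha$-approximate $\gamma_2$ norm is

$$
\gamma_2^{\alpha}(A) \;=\; \min\big\{\,\gamma_2(\tilde{A}) \;:\; \|A - \tilde{A}\|_{\infty} \le \alpha\,\big\},
$$

i.e., the smallest $\gamma_2$ norm over matrices $\tilde{A}$ that approximate $A$ entrywise to within $\alpha$. In this line of work the "natural matrix" associated with a concept class is typically the sign matrix whose rows are indexed by concepts, columns by domain elements, and whose $(h, x)$ entry is $h(x)$.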

Cite

Text

Edmonds et al. "On Learning and Refutation in Noninteractive Local Differential Privacy." Neural Information Processing Systems, 2022.

Markdown

[Edmonds et al. "On Learning and Refutation in Noninteractive Local Differential Privacy." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/edmonds2022neurips-learning/)

BibTeX

@inproceedings{edmonds2022neurips-learning,
  title     = {{On Learning and Refutation in Noninteractive Local Differential Privacy}},
  author    = {Edmonds, Alexander and Nikolov, Aleksandar and Pitassi, Toniann},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/edmonds2022neurips-learning/}
}