Perturb-and-Project: Differentially Private Similarities and Marginals

Abstract

We revisit the input perturbation framework for differential privacy, where noise is added to the input $A\in \mathcal{S}$ and the result is then projected back to the space of admissible datasets $\mathcal{S}$. Within this framework, we first design novel efficient algorithms to privately release pair-wise cosine similarities. Second, we derive a novel algorithm to compute $k$-way marginal queries over $n$ features; prior work could achieve comparable guarantees only for even $k$. Furthermore, we extend our results to $t$-sparse datasets, where our efficient algorithms yield novel, stronger guarantees whenever $t\le n^{5/6}/\log n$. Finally, we provide a theoretical perspective on why fast input perturbation algorithms work well in practice. The key technical ingredients behind our results are tight sum-of-squares certificates upper bounding the Gaussian complexity of sets of solutions.
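The mechanism described above can be illustrated with a minimal sketch. This is not the paper's algorithm, only an assumed instantiation: Gaussian noise is added to the input matrix, and the result is projected back onto an admissible set, here taken (as an assumption, motivated by the cosine-similarity application) to be matrices with unit-norm rows.

```python
import numpy as np

def perturb_and_project(A, sigma, seed=None):
    """Illustrative perturb-and-project sketch (not the paper's exact algorithm).

    1. Perturb: add i.i.d. Gaussian noise with scale `sigma` to the input A.
    2. Project: map the noisy matrix back to the admissible set, assumed here
       to be matrices whose rows have unit Euclidean norm (the natural set
       for releasing pair-wise cosine similarities).
    """
    rng = np.random.default_rng(seed)
    noisy = A + rng.normal(scale=sigma, size=A.shape)   # perturb step
    norms = np.linalg.norm(noisy, axis=1, keepdims=True)
    return noisy / np.maximum(norms, 1e-12)             # projection step

# Private pair-wise cosine similarities are then inner products of the rows:
# S = B @ B.T for B = perturb_and_project(A, sigma).
```

The projection step is what distinguishes this approach from plain input perturbation: the released object always lies in the admissible set, which is also what enables the Gaussian-complexity analysis mentioned in the abstract.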

Cite

Text

Cohen-Addad et al. "Perturb-and-Project: Differentially Private Similarities and Marginals." International Conference on Machine Learning, 2024.

Markdown

[Cohen-Addad et al. "Perturb-and-Project: Differentially Private Similarities and Marginals." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/cohenaddad2024icml-perturbandproject/)

BibTeX

@inproceedings{cohenaddad2024icml-perturbandproject,
  title     = {{Perturb-and-Project: Differentially Private Similarities and Marginals}},
  author    = {Cohen-Addad, Vincent and D'Orsi, Tommaso and Epasto, Alessandro and Mirrokni, Vahab and Zhong, Peilin},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {9161--9179},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/cohenaddad2024icml-perturbandproject/}
}