Optimality of Matrix Mechanism on $\ell_p^p$-Metric

Abstract

In this paper, we introduce the $\ell_p^p$-error metric (for $p \geq 2$) for answering linear queries under the constraint of differential privacy. We characterize this error under $(\epsilon,\delta)$-differential privacy in the natural add/remove model. Prior to this work, a tight characterization of the hardness of privately answering linear queries was known only under the $\ell_2^2$-error metric (Edmonds et al. 2020) and the $\ell_p^2$-error metric for unbiased mechanisms in the substitution model (Nikolov et al. 2024). As a direct consequence of our results, we give tight bounds on answering prefix sum and parity queries under differential privacy for all constant $p$ in terms of the $\ell_p^p$ error, generalizing the bounds of Henzinger et al. for $p=2$.
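For concreteness, a common convention (an assumption here, not necessarily the exact normalization adopted in the paper) defines the $\ell_p^p$ error of a mechanism $\mathcal{M}$ on a linear-query workload $A \in \mathbb{R}^{m \times n}$ as the worst-case expected $p$-th power of the $\ell_p$ distance between the noisy and true answers:

$$\mathrm{err}_p^p(\mathcal{M}, A) \;=\; \sup_{x \in \mathbb{N}^n} \; \mathbb{E}\Big[\,\big\| \mathcal{M}(x) - Ax \big\|_p^p\,\Big],$$

where $x$ is the histogram representation of the dataset and the expectation is taken over the randomness of $\mathcal{M}$. Setting $p=2$ recovers the familiar $\ell_2^2$ (mean-squared) error studied in prior work.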

Cite

Text

Zou et al. "Optimality of Matrix Mechanism on $\ell_p^p$-Metric." International Conference on Learning Representations, 2025.

Markdown

[Zou et al. "Optimality of Matrix Mechanism on $\ell_p^p$-Metric." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/zou2025iclr-optimality/)

BibTeX

@inproceedings{zou2025iclr-optimality,
  title     = {{Optimality of Matrix Mechanism on $\ell_p^p$-Metric}},
  author    = {Zou, Zongrui and Liu, Jingcheng and Upadhyay, Jalaj},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/zou2025iclr-optimality/}
}