On the Sample Complexity of Privately Learning Axis-Aligned Rectangles
Abstract
We revisit the fundamental problem of learning axis-aligned rectangles over a finite grid $X^d\subseteq\mathbb{R}^d$ with differential privacy. Existing results show that the sample complexity of this problem is at most $\min\left\{ d\cdot\log|X| \;,\; d^{1.5}\cdot\left(\log^*|X|\right)^{1.5}\right\}$. That is, existing constructions either require sample complexity that grows linearly with $\log|X|$, or sample complexity that grows super-linearly with the dimension $d$. We present a novel algorithm that reduces the sample complexity to only $\tilde{O}\left(d\cdot\left(\log^*|X|\right)^{1.5}\right)$, attaining a dimensionality-optimal dependency without requiring the sample complexity to grow with $\log|X|$. The technique used to attain this improvement involves deleting "exposed" data points on the go, in a fashion designed to avoid the cost of the adaptive composition theorems. The core of this technique may be of independent interest, introducing a new method for constructing statistically-efficient private algorithms.
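To make the "delete exposed points on the go" idea more concrete, here is a minimal, purely illustrative Python sketch. It handles one upper bound per axis: it samples an approximate threshold via a sensitivity-1 exponential mechanism, then deletes the points that the choice exposed before moving to the next axis. The scoring rule, the one-grid-step deletion rule, and the per-axis budget `eps_per_axis` are assumptions made for exposition; this is not the paper's algorithm and carries none of its privacy or utility analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def exp_mech_upper_threshold(values, grid, eps):
    """Exponential mechanism over grid thresholds (illustrative scoring).

    A threshold t is scored by -(number of points strictly above t),
    a sensitivity-1 utility, so the sampled t tends to land just above
    the largest data point on this axis.
    """
    scores = np.array([-(values > t).sum() for t in grid], dtype=float)
    probs = np.exp((eps / 2.0) * (scores - scores.max()))
    probs /= probs.sum()
    return grid[rng.choice(len(grid), p=probs)]

def toy_rectangle_upper_bounds(points, grid, eps_per_axis):
    """Pick an upper bound per axis, deleting "exposed" points as we go.

    Once axis j's threshold is fixed, points sitting at (within one grid
    step of) that boundary are removed before axis j+1 is handled -- a
    toy analogue of deleting exposed data points on the go, so later
    axes do not pay for the points already revealed.
    """
    pts = points.copy()
    step = grid[1] - grid[0]
    uppers = []
    for j in range(pts.shape[1]):
        t = exp_mech_upper_threshold(pts[:, j], grid, eps_per_axis)
        uppers.append(t)
        pts = pts[pts[:, j] < t - step]  # drop the exposed boundary points
    return uppers

# Usage: 200 points on a 64-point grid in d = 3 dimensions.
X = rng.integers(0, 50, size=(200, 3)).astype(float)
grid = np.arange(64, dtype=float)
print(toy_rectangle_upper_bounds(X, grid, eps_per_axis=0.5))
```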
Cite
Text
Sadigurschi and Stemmer. "On the Sample Complexity of Privately Learning Axis-Aligned Rectangles." Neural Information Processing Systems, 2021.
Markdown
[Sadigurschi and Stemmer. "On the Sample Complexity of Privately Learning Axis-Aligned Rectangles." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/sadigurschi2021neurips-sample/)
BibTeX
@inproceedings{sadigurschi2021neurips-sample,
  title = {{On the Sample Complexity of Privately Learning Axis-Aligned Rectangles}},
  author = {Sadigurschi, Menachem and Stemmer, Uri},
  booktitle = {Neural Information Processing Systems},
  year = {2021},
  url = {https://mlanthology.org/neurips/2021/sadigurschi2021neurips-sample/}
}