Differentially Private Learning with Margin Guarantees
Abstract
We present a series of new differentially private (DP) algorithms with dimension-independent margin guarantees. For the family of linear hypotheses, we give a pure DP learning algorithm that benefits from relative deviation margin guarantees, as well as an efficient DP learning algorithm with margin guarantees. We also present a new efficient DP learning algorithm with margin guarantees for kernel-based hypotheses with shift-invariant kernels, such as Gaussian kernels, and point out how our results can be extended to other kernels using oblivious sketching techniques. We further give a pure DP learning algorithm for a family of feed-forward neural networks for which we prove margin guarantees that are independent of the input dimension. Additionally, we describe a general label DP learning algorithm, which benefits from relative deviation margin bounds and is applicable to a broad family of hypothesis sets, including that of neural networks. Finally, we show how our DP learning algorithms can be augmented in a general way with model selection to choose the best confidence margin parameter.
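
To make the reduction from shift-invariant kernels to the linear case concrete, the sketch below combines random Fourier features for a Gaussian kernel with a pure-DP linear learner based on output perturbation (Chaudhuri et al., 2011). This is an illustrative assumption, not the paper's algorithms: the function names (rff_features, dp_margin_linear), the hyperparameters, and the SGD solver are all hypothetical, and unlike the paper's results, the noise norm in this sketch grows with the feature dimension.

import numpy as np

def rff_features(X, k=128, gamma=1.0, seed=0):
    # Random Fourier features approximating the Gaussian (shift-invariant)
    # kernel k(x, x') = exp(-gamma * ||x - x'||^2), reducing the kernel
    # problem to a linear one; feature rows have norm at most sqrt(2).
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], k))
    b = rng.uniform(0.0, 2.0 * np.pi, size=k)
    return np.sqrt(2.0 / k) * np.cos(X @ W + b)

def dp_margin_linear(X, y, eps, reg=0.1, steps=5000, seed=0):
    # Pure eps-DP linear learner via output perturbation on the
    # L2-regularized hinge loss (Chaudhuri et al., 2011). Assumes
    # labels in {-1, +1}; SGD here only approximates the exact ERM
    # minimizer that the sensitivity bound below formally requires.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    R = np.max(np.linalg.norm(X, axis=1))     # feature-norm bound
    w = np.zeros(d)
    for t in range(1, steps + 1):
        i = rng.integers(n)
        grad = reg * w
        if y[i] * (X[i] @ w) < 1.0:           # hinge-loss subgradient
            grad -= y[i] * X[i]
        w -= grad / (reg * t)                 # step size for strong convexity
    sens = 2.0 * R / (n * reg)                # L2 sensitivity of the minimizer
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)                    # uniform direction on the sphere
    r = rng.gamma(shape=d, scale=sens / eps)  # noise norm for pure eps-DP
    return w + r * u

# Toy usage: Gaussian-kernel classification made private via the
# linear reduction above.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
y = np.where(X[:, 0] > 0, 1.0, -1.0)
Phi = rff_features(X)
w_priv = dp_margin_linear(Phi, y, eps=1.0)
print("train accuracy:", np.mean(np.sign(Phi @ w_priv) == y))

The same two-stage pattern (feature map, then a private linear learner) is what allows kernel methods to inherit guarantees from the linear case; the paper's dimension-independent bounds rely on margin-based arguments rather than the crude sensitivity bound used here.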
Cite
Text
Bassily et al. "Differentially Private Learning with Margin Guarantees." Neural Information Processing Systems, 2022.
Markdown
[Bassily et al. "Differentially Private Learning with Margin Guarantees." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/bassily2022neurips-differentially/)
BibTeX
@inproceedings{bassily2022neurips-differentially,
title = {{Differentially Private Learning with Margin Guarantees}},
author = {Bassily, Raef and Mohri, Mehryar and Suresh, Ananda Theertha},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/bassily2022neurips-differentially/}
}