Differentially Private Model Personalization
Abstract
We study personalization of supervised learning with user-level differential privacy. Consider a setting with many users, each of whom has a training data set drawn from their own distribution $P_i$. Assuming some shared structure among the problems $P_i$, can users collectively learn the shared structure---and solve their tasks better than they could individually---while preserving the privacy of their data? We formulate this question using joint, user-level differential privacy---that is, we control what is leaked about each user's entire data set. We provide algorithms that exploit popular non-private approaches in this domain like the Almost-No-Inner-Loop (ANIL) method, and give strong user-level privacy guarantees for our general approach. When the problems $P_i$ are linear regression problems with each user's regression vector lying in a common, unknown low-dimensional subspace, we show that our efficient algorithms satisfy nearly optimal estimation error guarantees. We also establish a general, information-theoretic upper bound via an exponential mechanism-based algorithm.
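To make the linear-regression setting concrete, below is a minimal, hypothetical sketch (not the paper's actual algorithm) of an ANIL-style two-phase scheme: a shared rank-$k$ subspace is estimated from a clipped, noised aggregate of per-user statistics, and each user then fits personal coefficients within that subspace. Clipping bounds each user's contribution to the aggregate, which is what calibrating noise for user-level differential privacy requires. The noise scale `sigma`, the clipping bound `clip`, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_samples, d, k = 200, 50, 20, 3

# Ground truth: shared orthonormal basis U (d x k), per-user coefficients v_i,
# so each user's regression vector is w_i = U @ v_i.
U_true, _ = np.linalg.qr(rng.standard_normal((d, k)))
V_true = rng.standard_normal((n_users, k))

X = rng.standard_normal((n_users, n_samples, d))
y = np.einsum('usd,ud->us', X, V_true @ U_true.T) \
    + 0.1 * rng.standard_normal((n_users, n_samples))

# Phase 1a: each user computes a local least-squares estimate of w_i.
W_hat = np.stack([np.linalg.lstsq(X[i], y[i], rcond=None)[0]
                  for i in range(n_users)])

# Phase 1b: recover the shared subspace from a privatized second-moment matrix.
# Clipping each w_i to norm `clip` bounds one user's effect on M by clip**2 in
# Frobenius norm, so Gaussian noise at that scale gives user-level DP.
clip, sigma = 2.0, 1.0          # sigma would be set from the (eps, delta) budget
norms = np.linalg.norm(W_hat, axis=1, keepdims=True)
W_clip = W_hat * np.minimum(1.0, clip / norms)
M = W_clip.T @ W_clip
noise = rng.standard_normal((d, d)) * sigma * clip**2
M_priv = M + (noise + noise.T) / 2   # symmetrized Gaussian noise

# Top-k eigenvectors of the private moment matrix estimate the shared subspace.
_, eigvecs = np.linalg.eigh(M_priv)  # eigenvalues ascending
U_hat = eigvecs[:, -k:]

# Phase 2: each user refits personal coefficients in the learned subspace.
# This step is post-processing of M_priv plus purely local computation.
V_hat = np.stack([np.linalg.lstsq(X[i] @ U_hat, y[i], rcond=None)[0]
                  for i in range(n_users)])
W_personal = V_hat @ U_hat.T

# Subspace recovery error, measured as distance between projection matrices.
print(np.linalg.norm(U_hat @ U_hat.T - U_true @ U_true.T))
```

Note that the only cross-user aggregation here is the moment matrix `M`; everything downstream of `M_priv` is post-processing or local to a single user, which is the rough intuition behind the joint, user-level privacy accounting described in the abstract.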
Cite
Text
Jain et al. "Differentially Private Model Personalization." Neural Information Processing Systems, 2021.

Markdown
[Jain et al. "Differentially Private Model Personalization." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/jain2021neurips-differentially/)

BibTeX
@inproceedings{jain2021neurips-differentially,
title = {{Differentially Private Model Personalization}},
author = {Jain, Prateek and Rush, John and Smith, Adam and Song, Shuang and Thakurta, Abhradeep Guha},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/jain2021neurips-differentially/}
}