A Knowledge Transfer Framework for Differentially Private Sparse Learning
Abstract
We study the problem of estimating high-dimensional models with underlying sparse structure while preserving the privacy of each training example. We develop a differentially private high-dimensional sparse learning framework based on the idea of knowledge transfer. More specifically, we propose to distill the knowledge from a “teacher” estimator trained on a private dataset by creating a new dataset from auxiliary features, and then train a differentially private “student” estimator on this new dataset. In addition, we establish a linear convergence rate as well as a utility guarantee for our proposed method. For sparse linear regression and sparse logistic regression, our method achieves improved utility guarantees compared with the best known results (Kifer, Smith and Thakurta 2012; Wang and Gu 2019). We further demonstrate the superiority of our framework through both synthetic and real-world data experiments.
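To make the teacher-student pipeline concrete, below is a minimal Python sketch of the three steps the abstract describes: fit a (non-private) teacher on the private data, relabel public auxiliary features with the teacher, and fit a private student on the relabeled data. Everything here is an illustrative assumption rather than the paper's algorithm: `Lasso` stands in for the paper's sparse estimators, and Gaussian output perturbation with made-up `(epsilon, delta, sensitivity)` values stands in for the paper's actual privacy mechanism and noise calibration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# --- Hypothetical data: private pairs (X_priv, y_priv) and unlabeled
# --- auxiliary features X_aux (e.g., from a public source).
n, n_aux, d, s = 500, 300, 100, 5
beta_true = np.zeros(d)
beta_true[:s] = 1.0  # s-sparse ground truth
X_priv = rng.standard_normal((n, d))
y_priv = X_priv @ beta_true + 0.1 * rng.standard_normal(n)
X_aux = rng.standard_normal((n_aux, d))  # no private labels attached

# Step 1: teacher -- a sparse estimator fit directly on the private data.
# (Stand-in: Lasso; the paper's teacher may use a different sparse learner.)
teacher = Lasso(alpha=0.05).fit(X_priv, y_priv)

# Step 2: knowledge transfer -- label the auxiliary features with the
# teacher, creating a new dataset the student sees instead of raw private data.
y_aux = teacher.predict(X_aux)

# Step 3: student -- fit a sparse model on the transferred dataset, then
# privatize it. (Stand-in mechanism: Gaussian output perturbation on the
# student's coefficients, with illustrative parameter values only.)
student = Lasso(alpha=0.05).fit(X_aux, y_aux)
epsilon, delta, sensitivity = 1.0, 1e-5, 0.1  # assumed, not from the paper
sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
beta_private = student.coef_ + rng.normal(0.0, sigma, size=d)

print("support recovered by teacher:", np.flatnonzero(teacher.coef_))
print("private student estimate (first 5 coords):", np.round(beta_private[:5], 3))
```

In this sketch the student never touches the private examples directly, which is the intuition behind the framework: the only channel from the private data to the released model is the teacher's predictions, and the noise added in Step 3 is what would be calibrated to make the overall procedure differentially private.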
Cite
Text
Wang and Gu. "A Knowledge Transfer Framework for Differentially Private Sparse Learning." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I04.6090
Markdown
[Wang and Gu. "A Knowledge Transfer Framework for Differentially Private Sparse Learning." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/wang2020aaai-knowledge/) doi:10.1609/AAAI.V34I04.6090
BibTeX
@inproceedings{wang2020aaai-knowledge,
title = {{A Knowledge Transfer Framework for Differentially Private Sparse Learning}},
author = {Wang, Lingxiao and Gu, Quanquan},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2020},
pages = {6235--6242},
doi = {10.1609/AAAI.V34I04.6090},
url = {https://mlanthology.org/aaai/2020/wang2020aaai-knowledge/}
}