Differentially Private Iterative Gradient Hard Thresholding for Sparse Learning
Abstract
We consider the differentially private sparse learning problem, where the goal is to estimate the underlying sparse parameter vector of a statistical model in the high-dimensional regime while preserving the privacy of each training example. We propose a generic differentially private iterative gradient hard thresholding algorithm with a linear convergence rate and a strong utility guarantee. We demonstrate the superiority of our algorithm through two specific applications: sparse linear regression and sparse logistic regression. Specifically, for sparse linear regression, our algorithm achieves the best known utility guarantee without the extra support selection procedure used in previous work (Kifer et al., 2012). For sparse logistic regression, our algorithm obtains a utility guarantee with only a logarithmic dependence on the problem dimension. Experiments on both synthetic and real-world datasets verify the effectiveness of the proposed algorithm.
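The core update loop is easy to sketch: at each iteration, compute a gradient, privatize it with Gaussian noise, take a gradient step, and hard-threshold the iterate back to an s-sparse vector. Below is a minimal Python sketch for the sparse linear regression instance, not the paper's exact algorithm: the clipping threshold, step size, and noise scale are illustrative placeholders, and in practice sigma must be calibrated to the target (epsilon, delta) budget via a composition analysis over the T iterations.

import numpy as np

def hard_threshold(theta, s):
    """Keep the s largest-magnitude entries of theta; zero out the rest."""
    out = np.zeros_like(theta)
    idx = np.argsort(np.abs(theta))[-s:]
    out[idx] = theta[idx]
    return out

def dp_ight_linear_regression(X, y, s, T=100, eta=0.1, clip=1.0, sigma=1.0, seed=0):
    """Sketch of DP iterative gradient hard thresholding for least squares.

    Per-example gradients are clipped to bound sensitivity, averaged,
    privatized with Gaussian noise, and the iterate is hard-thresholded
    back to the s-sparse set after each gradient step. All hyperparameters
    here are hypothetical defaults, not the paper's calibrated values.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(T):
        # Per-example gradients of the least-squares loss (1/2)(x_i^T theta - y_i)^2.
        residuals = X @ theta - y                # shape (n,)
        grads = residuals[:, None] * X           # shape (n, d)
        # Clip each example's gradient to norm <= clip (bounds sensitivity).
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
        # Noisy average gradient (Gaussian mechanism).
        g = grads.mean(axis=0) + rng.normal(0.0, sigma * clip / n, size=d)
        # Gradient step followed by hard thresholding.
        theta = hard_threshold(theta - eta * g, s)
    return theta

For the sparse logistic regression application, only the per-example gradient computation changes to that of the logistic loss; the privatize, step, and threshold structure of the loop is identical.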
Cite
Text
Wang and Gu. "Differentially Private Iterative Gradient Hard Thresholding for Sparse Learning." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/519Markdown
[Wang and Gu. "Differentially Private Iterative Gradient Hard Thresholding for Sparse Learning." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/wang2019ijcai-differentially/) doi:10.24963/IJCAI.2019/519BibTeX
@inproceedings{wang2019ijcai-differentially,
title = {{Differentially Private Iterative Gradient Hard Thresholding for Sparse Learning}},
author = {Wang, Lingxiao and Gu, Quanquan},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2019},
pages = {3740--3747},
doi = {10.24963/IJCAI.2019/519},
url = {https://mlanthology.org/ijcai/2019/wang2019ijcai-differentially/}
}