Nearly Optimal Private LASSO

Abstract

We present a nearly optimal differentially private version of the well-known LASSO estimator. Our algorithm provides privacy protection with respect to each training data item. The excess risk of our algorithm, compared to the non-private version, is $\widetilde{O}(1/n^{2/3})$, assuming all input data have bounded $\ell_\infty$ norm. This is the first differentially private algorithm that achieves such a bound without polynomial dependence on the dimension $p$ and with no additional assumption on the design matrix. In addition, we show that this error bound is nearly optimal among all differentially private algorithms.
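The LASSO estimator referred to above is least-squares regression constrained to an $\ell_1$ ball. As a point of reference for the non-private baseline, here is a minimal sketch solving that constrained problem with the Frank-Wolfe (conditional gradient) method; the function name, step count, and data generation are illustrative and not taken from the paper:

```python
import numpy as np

def lasso_frank_wolfe(X, y, l1_radius=1.0, steps=200):
    """Minimize (1/n) * ||Xw - y||^2 over the l1 ball of radius l1_radius
    via Frank-Wolfe. Illustrative non-private sketch, not the paper's algorithm."""
    n, p = X.shape
    w = np.zeros(p)
    for t in range(steps):
        grad = (2.0 / n) * X.T @ (X @ w - y)
        i = np.argmax(np.abs(grad))            # linear minimizer over the l1 ball
        s = np.zeros(p)                        # is a signed vertex of the ball
        s[i] = -l1_radius * np.sign(grad[i])
        gamma = 2.0 / (t + 2.0)                # standard Frank-Wolfe step size
        w = (1.0 - gamma) * w + gamma * s
    return w

# Synthetic data with bounded l-infinity norm, matching the abstract's assumption.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 50))
w_true = np.zeros(50)
w_true[:3] = [0.5, -0.3, 0.2]
y = X @ w_true + 0.01 * rng.standard_normal(200)
w_hat = lasso_frank_wolfe(X, y)
```

The iterate stays inside the $\ell_1$ ball by construction, since each update is a convex combination of the current point and a vertex of the ball.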

Cite

Text

Talwar et al. "Nearly Optimal Private LASSO." Neural Information Processing Systems, 2015.

Markdown

[Talwar et al. "Nearly Optimal Private LASSO." Neural Information Processing Systems, 2015.](https://mlanthology.org/neurips/2015/talwar2015neurips-nearly/)

BibTeX

@inproceedings{talwar2015neurips-nearly,
  title     = {{Nearly Optimal Private LASSO}},
  author    = {Talwar, Kunal and Thakurta, Abhradeep Guha and Zhang, Li},
  booktitle = {Neural Information Processing Systems},
  year      = {2015},
  pages     = {3025--3033},
  url       = {https://mlanthology.org/neurips/2015/talwar2015neurips-nearly/}
}