On the Computational Complexity of Private High-Dimensional Model Selection

Abstract

We consider the problem of model selection in a high-dimensional sparse linear regression model under privacy constraints. We propose a differentially private (DP) best subset selection method with strong statistical utility guarantees by adopting the well-known exponential mechanism to select the best model. To achieve computational expediency, we propose an efficient Metropolis-Hastings algorithm and, under certain regularity conditions, establish that it enjoys polynomial mixing time to its stationary distribution. As a result, we also establish both approximate differential privacy and statistical utility for the estimates of the mixed Metropolis-Hastings chain. Finally, we perform illustrative experiments on simulated data showing that our algorithm can quickly identify active features under reasonable privacy budget constraints.
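To make the idea concrete, below is a minimal sketch (not the authors' implementation) of a Metropolis-Hastings sampler whose stationary distribution is an exponential-mechanism-style distribution over size-s feature subsets. The utility u(S) = -RSS(S), the single-swap proposal, and the sensitivity parameter are illustrative assumptions; the paper's actual score function, proposal kernel, and privacy accounting may differ.

```python
# Hypothetical sketch: Metropolis-Hastings over feature subsets targeting an
# exponential-mechanism-style distribution pi(S) ∝ exp(-eps * RSS(S) / (2 * sens)).
# The utility, proposal, and sensitivity here are assumptions for illustration only.
import numpy as np

def rss(X, y, subset):
    """Residual sum of squares of a least-squares fit restricted to `subset`."""
    Xs = X[:, subset]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    return float(resid @ resid)

def mh_subset_selection(X, y, s, epsilon, sensitivity, n_iter=5000, seed=None):
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    current = list(rng.choice(p, size=s, replace=False))
    current_rss = rss(X, y, current)
    for _ in range(n_iter):
        # Symmetric proposal: swap one selected feature for one unselected feature.
        proposal = current.copy()
        out_idx = int(rng.integers(s))
        inactive = [j for j in range(p) if j not in current]
        proposal[out_idx] = int(rng.choice(inactive))
        prop_rss = rss(X, y, proposal)
        # Accept with the Metropolis ratio of the exponential-mechanism target.
        log_accept = epsilon * (current_rss - prop_rss) / (2.0 * sensitivity)
        if np.log(rng.random()) < log_accept:
            current, current_rss = proposal, prop_rss
    return sorted(int(j) for j in current)

# Example usage on simulated sparse data.
rng = np.random.default_rng(0)
n, p, s = 200, 50, 3
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 2.0
y = X @ beta + rng.standard_normal(n)
print(mh_subset_selection(X, y, s, epsilon=5.0, sensitivity=1.0, seed=1))
```

Because the swap proposal is symmetric, the acceptance ratio reduces to the ratio of target probabilities, so the chain's stationary distribution matches the exponential-mechanism distribution over subsets under these assumed choices.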

Cite

Text

Roy et al. "On the Computational Complexity of Private High-Dimensional Model Selection." Neural Information Processing Systems, 2024. doi:10.52202/079017-1156

Markdown

[Roy et al. "On the Computational Complexity of Private High-Dimensional Model Selection." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/roy2024neurips-computational/) doi:10.52202/079017-1156

BibTeX

@inproceedings{roy2024neurips-computational,
  title     = {{On the Computational Complexity of Private High-Dimensional Model Selection}},
  author    = {Roy, Saptarshi and Wang, Zehua and Tewari, Ambuj},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-1156},
  url       = {https://mlanthology.org/neurips/2024/roy2024neurips-computational/}
}