Principal Component Analysis in the Stochastic Differential Privacy Model

Abstract

In this paper, we study the differentially private Principal Component Analysis (PCA) problem in stochastic optimization settings. We first propose a new stochastic gradient perturbation PCA mechanism (DP-SPCA) that computes the right singular subspace under $(\epsilon,\delta)$-differential privacy. To obtain better utility guarantees and performance, we then present a new differentially private stochastic variance reduction mechanism (DP-VRPCA) with gradient perturbation for PCA. To the best of our knowledge, this is the first work to apply stochastic gradient perturbation to $(\epsilon,\delta)$-differentially private PCA. We also compare the proposed algorithms with existing state-of-the-art methods, and experiments on real-world datasets and on classification tasks confirm the improved theoretical guarantees of our algorithms.
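
To make the gradient-perturbation idea concrete, below is a minimal sketch of a noisy stochastic PCA update (Oja-style) for the top-k right singular subspace. It is not the paper's DP-SPCA or DP-VRPCA algorithm: the step size eta, the noise scale sigma, and the assumption that each data row has unit L2 norm are illustrative choices, and the privacy calibration shown is a generic Gaussian-mechanism/composition bound rather than the authors' analysis.

import numpy as np

def dp_stochastic_pca_sketch(X, k, epsilon, delta, eta=0.1, epochs=5, seed=0):
    """Sketch of gradient-perturbed stochastic PCA (Oja-style updates).

    Assumes each row of X has unit L2 norm so the per-sample gradient
    x x^T W has bounded norm; the sigma below is an illustrative
    Gaussian-mechanism calibration, not the paper's bound.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    T = epochs * n                                    # total noisy updates
    # Illustrative noise scale: per-update sensitivity ~2*eta, with a
    # sqrt(T) composition factor (assumption, not the paper's analysis).
    sigma = 2 * eta * np.sqrt(2 * T * np.log(1.25 / delta)) / epsilon

    W, _ = np.linalg.qr(rng.standard_normal((d, k)))  # random orthonormal start
    for _ in range(T):
        x = X[rng.integers(n)]                        # sample one data row
        grad = np.outer(x, x @ W)                     # stochastic PCA gradient x x^T W
        noise = rng.normal(scale=sigma, size=(d, k))  # Gaussian perturbation
        W = W + eta * (grad + noise)                  # noisy Oja update
        W, _ = np.linalg.qr(W)                        # re-orthonormalize columns
    return W                                          # approx. top-k right singular subspace

The variance-reduced variant (in the spirit of DP-VRPCA) would additionally maintain a full-gradient snapshot per epoch and perturb the variance-reduced stochastic gradient instead; the control flow above otherwise stays the same.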

Cite

Text

Shang et al. "Principal Component Analysis in the Stochastic Differential Privacy Model." Uncertainty in Artificial Intelligence, 2021.

Markdown

[Shang et al. "Principal Component Analysis in the Stochastic Differential Privacy Model." Uncertainty in Artificial Intelligence, 2021.](https://mlanthology.org/uai/2021/shang2021uai-principal/)

BibTeX

@inproceedings{shang2021uai-principal,
  title     = {{Principal Component Analysis in the Stochastic Differential Privacy Model}},
  author    = {Shang, Fanhua and Zhang, Zhihui and Xu, Tao and Liu, Yuanyuan and Liu, Hongying},
  booktitle = {Uncertainty in Artificial Intelligence},
  year      = {2021},
  pages     = {1110--1119},
  volume    = {161},
  url       = {https://mlanthology.org/uai/2021/shang2021uai-principal/}
}