On Connecting Stochastic Gradient MCMC and Differential Privacy
Abstract
Concerns about data security and confidentiality have been raised as machine learning is applied to real-world problems. Differential privacy provides a principled and rigorous privacy guarantee for machine learning models. While it is common to inject noise into a model so that it satisfies a required differential-privacy property, it is generally hard to balance the trade-off between privacy and utility. We show that stochastic gradient Markov chain Monte Carlo (SG-MCMC) – a class of scalable Bayesian posterior sampling algorithms – satisfies strong differential privacy when carefully chosen stepsizes are employed. We develop theory on the performance of the proposed differentially private SG-MCMC method. We conduct experiments to support our analysis, and show that a standard SG-MCMC sampler with minor modifications can reach state-of-the-art performance in terms of both privacy and utility on Bayesian learning tasks.
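For readers unfamiliar with the sampler the paper builds on, below is a minimal Python sketch of one stochastic gradient Langevin dynamics (SGLD) update, the canonical SG-MCMC algorithm, with gradient clipping added in the style of standard private learning. The function names, the clipping bound clip_norm, and the toy target are illustrative assumptions, not the paper's API; the paper's contribution is the stepsize calibration under which such updates satisfy differential privacy, which is not reproduced here.

import numpy as np

def sgld_step(theta, grad_log_post, stepsize, clip_norm, rng):
    # One SGLD update: theta <- theta + (stepsize/2) * g + N(0, stepsize * I),
    # where g is a stochastic estimate of the gradient of the log posterior.
    g = grad_log_post(theta)
    # Clip the stochastic gradient so any single example has bounded
    # influence (the bounded-sensitivity device used in private learning;
    # illustrative here, not the paper's exact mechanism).
    norm = np.linalg.norm(g)
    if norm > clip_norm:
        g = g * (clip_norm / norm)
    # The injected Gaussian noise is the standard SGLD term; the privacy
    # guarantee hinges on how stepsize is scheduled across iterations.
    noise = rng.normal(0.0, np.sqrt(stepsize), size=theta.shape)
    return theta + 0.5 * stepsize * g + noise

# Hypothetical usage: sample from a standard-normal "posterior",
# whose log-density gradient is -theta.
rng = np.random.default_rng(0)
theta = np.zeros(2)
for t in range(1000):
    theta = sgld_step(theta, lambda th: -th, stepsize=1e-2,
                      clip_norm=1.0, rng=rng)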
Cite

Text

Li et al. "On Connecting Stochastic Gradient MCMC and Differential Privacy." Artificial Intelligence and Statistics, 2019.

Markdown

[Li et al. "On Connecting Stochastic Gradient MCMC and Differential Privacy." Artificial Intelligence and Statistics, 2019.](https://mlanthology.org/aistats/2019/li2019aistats-connecting/)

BibTeX
@inproceedings{li2019aistats-connecting,
title = {{On Connecting Stochastic Gradient MCMC and Differential Privacy}},
author = {Li, Bai and Chen, Changyou and Liu, Hao and Carin, Lawrence},
booktitle = {Artificial Intelligence and Statistics},
year = {2019},
pages = {557--566},
volume = {89},
url = {https://mlanthology.org/aistats/2019/li2019aistats-connecting/}
}