Functional Renyi Differential Privacy for Generative Modeling
Abstract
Recently, Rényi differential privacy (RDP) has become an alternative to the ordinary differential privacy (DP) notion, owing to its convenient composition rules and flexibility. However, existing mechanisms with RDP guarantees are based on randomizing a fixed, finite-dimensional vector output. In this work, following Hall et al. (2013), we further extend RDP to functional outputs, where the output space can be infinite-dimensional, and develop all the necessary tools, e.g., the (subsampled) Gaussian mechanism and composition and post-processing rules, to facilitate its practical adoption. As an illustration, we apply functional RDP (f-RDP) to functions in a reproducing kernel Hilbert space (RKHS) to develop a differentially private generative model (DPGM), where training can be interpreted as releasing loss functions (in an RKHS) with RDP guarantees. Empirically, the new training paradigm achieves a significant improvement in the privacy-utility trade-off over existing alternatives at $\epsilon=0.2$.
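For context on the Gaussian mechanism the abstract refers to: in the standard finite-dimensional setting, its RDP guarantee takes a simple closed form (Mironov, 2017), which the paper's f-RDP framework generalizes to infinite-dimensional (e.g., RKHS-valued) outputs. A sketch of that standard guarantee:

```latex
% Gaussian mechanism M(x) = f(x) + \mathcal{N}(0, \sigma^2 I),
% with \ell_2-sensitivity \Delta = \max_{x \sim x'} \lVert f(x) - f(x') \rVert_2.
% For every order \alpha > 1, M satisfies (\alpha, \epsilon(\alpha))-RDP with
\epsilon(\alpha) \;=\; \frac{\alpha \, \Delta^2}{2\sigma^2}.
```

Composition then simply adds the $\epsilon(\alpha)$ terms across releases at the same order $\alpha$.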
Cite
Text

Jiang et al. "Functional Renyi Differential Privacy for Generative Modeling." ICML 2023 Workshops: DeployableGenerativeAI, 2023.

Markdown

[Jiang et al. "Functional Renyi Differential Privacy for Generative Modeling." ICML 2023 Workshops: DeployableGenerativeAI, 2023.](https://mlanthology.org/icmlw/2023/jiang2023icmlw-functional/)

BibTeX
@inproceedings{jiang2023icmlw-functional,
title = {{Functional Renyi Differential Privacy for Generative Modeling}},
author = {Jiang, Dihong and Sun, Sun and Yu, Yaoliang},
booktitle = {ICML 2023 Workshops: DeployableGenerativeAI},
year = {2023},
url = {https://mlanthology.org/icmlw/2023/jiang2023icmlw-functional/}
}