Generalization in Generative Adversarial Networks: A Novel Perspective from Privacy Protection
Abstract
In this paper, we aim to understand the generalization properties of generative adversarial networks (GANs) from the new perspective of privacy protection. Theoretically, we prove that a GAN trained with a differentially private learning algorithm cannot overfit beyond a certain degree, i.e., its generalization gap can be bounded. Moreover, some recent works, such as the Bayesian GAN, can be re-interpreted through this privacy-protection lens. Quantitatively, to evaluate the information leakage of well-trained GAN models, we perform various membership attacks on them. The results show that existing Lipschitz regularization techniques are effective not only in reducing the generalization gap but also in alleviating information leakage from the training dataset.
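For context, the abstract's theoretical claim builds on the standard (epsilon, delta)-differential-privacy definition and the well-known connection between privacy and generalization. The bound below is a sketch of that standard connection, not necessarily the exact theorem proved in the paper. A randomized training algorithm $\mathcal{A}$ is $(\epsilon, \delta)$-differentially private if, for all datasets $S, S'$ differing in a single example and all measurable output sets $O$,

    \Pr[\mathcal{A}(S) \in O] \le e^{\epsilon} \, \Pr[\mathcal{A}(S') \in O] + \delta.

For a loss bounded in $[0, 1]$, a standard consequence is the on-average generalization bound

    \mathbb{E}\big[L_{\mathcal{D}}(\mathcal{A}(S))\big] \le e^{\epsilon} \, \mathbb{E}\big[L_{S}(\mathcal{A}(S))\big] + \delta,

so the expected generalization gap is at most $(e^{\epsilon} - 1) + \delta$: a small privacy budget directly caps the degree of overfitting.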
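As a concrete illustration of the membership attacks mentioned in the abstract, below is a minimal threshold attack on discriminator scores, a common way to probe GAN information leakage. All names here are illustrative and the paper's actual attack suite may differ.

import numpy as np

def membership_attack(discriminator, train_samples, holdout_samples):
    """Threshold attack: on an overfit GAN, training members tend to
    receive higher discriminator scores than unseen holdout points."""
    train_scores = np.asarray([discriminator(x) for x in train_samples])
    hold_scores = np.asarray([discriminator(x) for x in holdout_samples])
    # Sweep candidate thresholds; report the best balanced attack
    # accuracy (0.5 means the attacker does no better than chance).
    best_acc = 0.5
    for t in np.concatenate([train_scores, hold_scores]):
        tpr = (train_scores >= t).mean()  # members flagged as members
        tnr = (hold_scores < t).mean()    # non-members flagged as such
        best_acc = max(best_acc, 0.5 * (tpr + tnr))
    return best_acc

# Toy usage with a stand-in score function (illustrative only):
rng = np.random.default_rng(0)
disc = lambda x: float(x.sum())           # placeholder "discriminator"
members = [rng.normal(0.1, 1.0, size=8) for _ in range(100)]
non_members = [rng.normal(0.0, 1.0, size=8) for _ in range(100)]
print(f"attack accuracy: {membership_attack(disc, members, non_members):.2f}")

An attack accuracy near 0.5 indicates little leakage; accuracies well above 0.5 are the kind of signal the paper's quantitative evaluation measures on well-trained GAN models.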
Cite
Text
Wu et al. "Generalization in Generative Adversarial Networks: A Novel Perspective from Privacy Protection." Neural Information Processing Systems, 2019.
Markdown
[Wu et al. "Generalization in Generative Adversarial Networks: A Novel Perspective from Privacy Protection." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/wu2019neurips-generalization/)
BibTeX
@inproceedings{wu2019neurips-generalization,
title = {{Generalization in Generative Adversarial Networks: A Novel Perspective from Privacy Protection}},
author = {Wu, Bingzhe and Zhao, Shiwan and Chen, Chaochao and Xu, Haoyang and Wang, Li and Zhang, Xiaolu and Sun, Guangyu and Zhou, Jun},
booktitle = {Neural Information Processing Systems},
year = {2019},
pages = {307--317},
url = {https://mlanthology.org/neurips/2019/wu2019neurips-generalization/}
}