Bias and Generalization in Deep Generative Models: An Empirical Study
Abstract
In high-dimensional settings, density estimation algorithms rely crucially on their inductive bias. Despite recent empirical success, the inductive bias of deep generative models is not well understood. In this paper, we propose a framework to systematically investigate bias and generalization in deep generative models of images by probing the learning algorithm with carefully designed training datasets. By measuring properties of the learned distribution, we uncover consistent patterns of generalization, and we verify that these patterns hold across datasets, common models, and architectures.
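The probing methodology can be illustrated concretely. Below is a minimal sketch, assuming a numerosity-style experiment in the spirit of the paper: every training image is constructed to contain exactly k objects, and the same property is then measured on the model's samples to see how the learned distribution generalizes around the training value. The `train_model` call, `make_dot_image`, and `count_dots` are hypothetical illustrations, not the authors' code.

```python
import numpy as np
from scipy import ndimage

def make_dot_image(k, size=32, radius=2, rng=None):
    """Render one image containing exactly k well-separated dots."""
    rng = rng if rng is not None else np.random.default_rng()
    img = np.zeros((size, size), dtype=np.float32)
    centers = []
    while len(centers) < k:
        y, x = rng.integers(radius, size - radius, size=2)
        # Reject candidates too close to an existing dot so counts stay exact.
        if all((y - cy) ** 2 + (x - cx) ** 2 > (3 * radius) ** 2
               for cy, cx in centers):
            centers.append((y, x))
            yy, xx = np.ogrid[:size, :size]
            img[(yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2] = 1.0
    return img

def count_dots(img, threshold=0.5):
    """Measure the probed property: number of connected components."""
    _, n = ndimage.label(img > threshold)
    return n

# Carefully designed training set: every image contains exactly 3 dots.
rng = np.random.default_rng(0)
train = np.stack([make_dot_image(3, rng=rng) for _ in range(1000)])

# model = train_model(train)   # any GAN/VAE; hypothetical placeholder
# samples = model.sample(1000)
samples = train                # stand-in so this sketch runs end to end

# The histogram of dot counts over samples is the object of study: how it
# spreads around the training value (exactly 3) characterizes the bias.
hist = np.bincount([count_dots(s) for s in samples])
print(hist)
```

Under this setup, comparing the histogram of measured counts on generated samples against the degenerate training histogram (all mass on 3) is what exposes the model's inductive bias.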
Cite
Text
Zhao et al. "Bias and Generalization in Deep Generative Models: An Empirical Study." Neural Information Processing Systems, 2018.

Markdown
[Zhao et al. "Bias and Generalization in Deep Generative Models: An Empirical Study." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/zhao2018neurips-bias/)

BibTeX
@inproceedings{zhao2018neurips-bias,
  title     = {{Bias and Generalization in Deep Generative Models: An Empirical Study}},
  author    = {Zhao, Shengjia and Ren, Hongyu and Yuan, Arianna and Song, Jiaming and Goodman, Noah and Ermon, Stefano},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {10792--10801},
  url       = {https://mlanthology.org/neurips/2018/zhao2018neurips-bias/}
}