Bias Correction of Learned Generative Models Using Likelihood-Free Importance Weighting

Abstract

A learned generative model often produces biased statistics relative to the underlying data distribution. A standard technique to correct this bias is importance sampling, where samples from the model are weighted by the ratio of their likelihoods under the true data distribution and the model distribution. When this likelihood ratio is unknown, it can be estimated by training a probabilistic classifier to distinguish samples from the two distributions. We employ this likelihood-free importance weighting method to correct for the bias in generative models. We find that this technique consistently improves standard goodness-of-fit metrics for evaluating the sample quality of state-of-the-art deep generative models, suggesting reduced bias. Finally, we demonstrate its utility on representative applications in a) data augmentation for classification using generative adversarial networks, and b) model-based policy evaluation using off-policy data.
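
The following is a minimal, self-contained sketch of the likelihood-free importance weighting idea summarized in the abstract, not the authors' implementation. It uses toy 1-D Gaussians for the data and model distributions and a scikit-learn logistic-regression classifier; those choices, the sample sizes, and the estimated statistic (a mean) are illustrative assumptions. A classifier D(x) trained to separate data from model samples yields density-ratio estimates w(x) = D(x) / (1 - D(x)), which reweight model samples when estimating expectations under the data distribution.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy setup (assumed for illustration): the "true" data distribution and a
# biased learned model, both 1-D Gaussians so the correction is easy to verify.
data = rng.normal(loc=1.0, scale=1.0, size=(5000, 1))           # samples ~ p_data
model_samples = rng.normal(loc=0.0, scale=1.0, size=(5000, 1))  # samples ~ p_model

# Train a probabilistic classifier to distinguish real data (label 1)
# from model samples (label 0).
X = np.vstack([data, model_samples])
y = np.concatenate([np.ones(len(data)), np.zeros(len(model_samples))])
clf = LogisticRegression().fit(X, y)

# For fresh model samples, the classifier output D(x) gives the
# importance-weight estimate w(x) = D(x) / (1 - D(x)) ~ p_data(x) / p_model(x).
x_new = rng.normal(loc=0.0, scale=1.0, size=(20000, 1))
d = clf.predict_proba(x_new)[:, 1]
w = d / np.clip(1.0 - d, 1e-6, None)

# Self-normalized importance-weighted estimate of E_{p_data}[x],
# computed from model samples only.
naive_mean = x_new.mean()                            # biased toward the model's mean (0.0)
corrected_mean = (w * x_new[:, 0]).sum() / w.sum()   # moves toward the data mean (1.0)
print(f"naive estimate: {naive_mean:.3f}, bias-corrected estimate: {corrected_mean:.3f}")
```

With a reasonably calibrated classifier and enough samples, the reweighted estimate recovers the data statistic that unweighted model samples would misreport; the same recipe applies to the downstream uses the abstract mentions, such as reweighting generated samples for data augmentation or model-based policy evaluation.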

Cite

Text

Grover et al. "Bias Correction of Learned Generative Models Using Likelihood-Free Importance Weighting." Neural Information Processing Systems, 2019.

Markdown

[Grover et al. "Bias Correction of Learned Generative Models Using Likelihood-Free Importance Weighting." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/grover2019neurips-bias/)

BibTeX

@inproceedings{grover2019neurips-bias,
  title     = {{Bias Correction of Learned Generative Models Using Likelihood-Free Importance Weighting}},
  author    = {Grover, Aditya and Song, Jiaming and Kapoor, Ashish and Tran, Kenneth and Agarwal, Alekh and Horvitz, Eric J. and Ermon, Stefano},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {11058--11070},
  url       = {https://mlanthology.org/neurips/2019/grover2019neurips-bias/}
}