Improving Generative Moment Matching Networks with Distribution Partition

Abstract

Generative moment matching networks (GMMNs) offer a theoretically sound approach to learning deep generative models. However, such methods typically suffer from high sample complexity, which makes them impractical for generating complex data. In this paper, we present a new strategy for training GMMNs with low sample complexity while retaining theoretical soundness. Our method introduces auxiliary variables whose values are provided in practice by a pre-trained model, such as an encoder network. Conditioned on these variables, we partition the distribution into a set of conditional distributions, each of which can be matched effectively with low sample complexity. We instantiate this strategy in an amortized network called GMMN-DP, which shares auxiliary-variable information across the data generation task, and develop an efficient stochastic training algorithm. Experimental results show that GMMN-DP can generate complex samples on datasets such as CelebA and CIFAR-10, where the vanilla GMMN fails.
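GMMNs train a generator by minimizing the maximum mean discrepancy (MMD) between generated and real samples. As a minimal illustration of the objective being matched (not the paper's GMMN-DP method), a biased estimator of squared MMD with a Gaussian RBF kernel can be sketched in NumPy; the function names and bandwidth choice here are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    """Gaussian RBF kernel matrix between sample sets x (n, d) and y (m, d)."""
    # Pairwise squared Euclidean distances via broadcasting.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2(x, y, bandwidth=1.0):
    """Biased estimate of squared MMD between the distributions of x and y."""
    return (rbf_kernel(x, x, bandwidth).mean()
            - 2.0 * rbf_kernel(x, y, bandwidth).mean()
            + rbf_kernel(y, y, bandwidth).mean())
```

The biased estimator is zero when the two sample sets coincide and grows as the distributions separate; the paper's contribution is reducing the number of samples such an estimator needs by matching conditional distributions instead of the full joint.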

Cite

Text

Ren et al. "Improving Generative Moment Matching Networks with Distribution Partition." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I11.17133

Markdown

[Ren et al. "Improving Generative Moment Matching Networks with Distribution Partition." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/ren2021aaai-improving/) doi:10.1609/AAAI.V35I11.17133

BibTeX

@inproceedings{ren2021aaai-improving,
  title     = {{Improving Generative Moment Matching Networks with Distribution Partition}},
  author    = {Ren, Yong and Luo, Yucen and Zhu, Jun},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {9403--9410},
  doi       = {10.1609/AAAI.V35I11.17133},
  url       = {https://mlanthology.org/aaai/2021/ren2021aaai-improving/}
}