LDMGAN: Reducing Mode Collapse in GANs with Latent Distribution Matching

Abstract

Generative Adversarial Networks (GANs) have shown impressive results in modeling distributions over complicated manifolds such as those of natural images. However, GANs often suffer from mode collapse: they tend to capture only one or a few modes of the data distribution. To address this problem, we propose a novel framework called LDMGAN. We first introduce the Latent Distribution Matching (LDM) constraint, which regularizes the generator by aligning the distribution of generated samples with that of real samples in latent space. To obtain such a latent space, we propose a regularized AutoEncoder (AE) that maps the data distribution to the prior distribution in the encoded space. Extensive experiments on synthetic data and real-world datasets show that our proposed framework significantly improves GAN's stability and diversity.
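The abstract's core idea — penalizing the generator when the latent statistics of generated samples drift from those of real samples — can be sketched with a simple moment-matching penalty. The paper's exact LDM objective is not reproduced here; the function below is a hypothetical stand-in that matches only the first two moments of encoder outputs, purely to illustrate the idea of aligning distributions in latent space.

```python
import numpy as np

def latent_matching_loss(z_real, z_fake):
    """Hypothetical moment-matching penalty (not the paper's exact LDM loss).

    z_real: encoder outputs for real samples, shape (batch, latent_dim)
    z_fake: encoder outputs for generated samples, same shape

    Penalizes the squared gap between the per-dimension means and the
    covariance matrices of the two latent batches.
    """
    mean_gap = np.sum((z_real.mean(axis=0) - z_fake.mean(axis=0)) ** 2)
    cov_gap = np.sum((np.cov(z_real, rowvar=False)
                      - np.cov(z_fake, rowvar=False)) ** 2)
    return mean_gap + cov_gap

# Stand-ins for encoder outputs; in LDMGAN these would come from the
# regularized AE applied to real data and to generator samples G(z).
rng = np.random.default_rng(0)
z_real = rng.normal(size=(256, 8))
z_fake = rng.normal(loc=0.5, size=(256, 8))  # slightly shifted fake batch

loss = latent_matching_loss(z_real, z_fake)
```

A collapsed generator that emits near-identical samples would produce a degenerate latent covariance, so a penalty of this form grows exactly when diversity is lost — which is the intuition behind regularizing in the encoded space rather than pixel space.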

Cite

Text

Zuo et al. "LDMGAN: Reducing Mode Collapse in GANs with Latent Distribution Matching." International Conference on Learning Representations, 2020.

Markdown

[Zuo et al. "LDMGAN: Reducing Mode Collapse in GANs with Latent Distribution Matching." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/zuo2020iclr-ldmgan/)

BibTeX

@inproceedings{zuo2020iclr-ldmgan,
  title     = {{LDMGAN: Reducing Mode Collapse in GANs with Latent Distribution Matching}},
  author    = {Zuo, Zhiwen and Zhao, Lei and Zhang, Huiming and Mo, Qihang and Chen, Haibo and Wang, Zhizhong and Li, AiLin and Qiu, Lihong and Xing, Wei and Lu, Dongming},
  booktitle = {International Conference on Learning Representations},
  year      = {2020},
  url       = {https://mlanthology.org/iclr/2020/zuo2020iclr-ldmgan/}
}