RelGAN: Relational Generative Adversarial Networks for Text Generation

Abstract

Generative adversarial networks (GANs) have achieved great success at generating realistic images. However, text generation still remains a challenging task for modern GAN architectures. In this work, we propose RelGAN, a new GAN architecture for text generation, consisting of three main components: a relational memory based generator for long-distance dependency modeling, the Gumbel-Softmax relaxation for training GANs on discrete data, and multiple embedded representations in the discriminator to provide a more informative signal for generator updates. Our experiments show that RelGAN outperforms current state-of-the-art models in terms of sample quality and diversity, and ablation studies reveal that each component of RelGAN contributes critically to its performance improvements. Moreover, a key advantage of our method, which distinguishes it from other GANs, is the ability to control the trade-off between sample quality and diversity via a single adjustable parameter. Finally, RelGAN is the first architecture that makes GANs with the Gumbel-Softmax relaxation succeed in generating realistic text.
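The Gumbel-Softmax relaxation mentioned in the abstract replaces non-differentiable sampling of discrete tokens with a temperature-controlled softmax over noisy logits, allowing gradients to flow from the discriminator back to the generator. Below is a minimal, generic sketch of the trick in NumPy; it is an illustration of the standard technique, not the paper's implementation, and the function name and temperature value are our own choices.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Differentiable approximation of a one-hot categorical sample.

    logits: unnormalized log-probabilities over the vocabulary.
    tau: temperature; smaller values give samples closer to one-hot,
         larger values give smoother (higher-entropy) distributions.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Sample Gumbel(0, 1) noise via the inverse-CDF trick:
    # G = -log(-log(U)), U ~ Uniform(0, 1).
    u = rng.uniform(size=np.shape(logits))
    g = -np.log(-np.log(u + 1e-20) + 1e-20)
    # Perturb logits with Gumbel noise, scale by temperature,
    # then apply a numerically stable softmax.
    y = (logits + g) / tau
    y = y - y.max(axis=-1, keepdims=True)
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)
```

As tau approaches 0 the output concentrates on a single token (matching a hard categorical sample), while large tau spreads mass across the vocabulary; this temperature is the kind of single adjustable knob the abstract refers to for trading off quality against diversity.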

Cite

Text

Nie et al. "RelGAN: Relational Generative Adversarial Networks for Text Generation." International Conference on Learning Representations, 2019.

Markdown

[Nie et al. "RelGAN: Relational Generative Adversarial Networks for Text Generation." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/nie2019iclr-relgan/)

BibTeX

@inproceedings{nie2019iclr-relgan,
  title     = {{RelGAN: Relational Generative Adversarial Networks for Text Generation}},
  author    = {Nie, Weili and Narodytska, Nina and Patel, Ankit},
  booktitle = {International Conference on Learning Representations},
  year      = {2019},
  url       = {https://mlanthology.org/iclr/2019/nie2019iclr-relgan/}
}