TaylorGAN: Neighbor-Augmented Policy Update Towards Sample-Efficient Natural Language Generation

Abstract

Score function-based natural language generation (NLG) approaches such as REINFORCE generally suffer from low sample efficiency and training instability. This is mainly due to the non-differentiable nature of discrete-space sampling, which forces these methods to treat the discriminator as a black box and ignore its gradient information. To improve the sample efficiency and reduce the variance of REINFORCE, we propose a novel approach, TaylorGAN, which augments the gradient estimation with off-policy updates and a first-order Taylor expansion. This approach enables training NLG models from scratch with a smaller batch size and without maximum likelihood pre-training, and it outperforms existing GAN-based methods on multiple metrics of quality and diversity.
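The core trick the abstract describes can be illustrated in isolation: a first-order Taylor expansion of the discriminator's reward around a sampled token's embedding gives cheap reward estimates for every neighboring token, which can then feed an off-policy update. The sketch below is a minimal NumPy illustration under toy assumptions: the quadratic `reward` stands in for the discriminator, and all names and sizes are illustrative, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

V, d = 8, 4                       # toy vocabulary size and embedding dim
E = rng.normal(size=(V, d))       # token embedding table

def reward(e):
    # stand-in for a discriminator reward, differentiable in the embedding
    return -np.sum(e ** 2)

def reward_grad(e):
    # analytic gradient of the toy reward w.r.t. the embedding
    return -2.0 * e

tok = 3                           # token sampled by the generator policy
e_tok = E[tok]

# First-order Taylor expansion around the sampled embedding:
#   R(e') ~= R(e_tok) + grad R(e_tok) . (e' - e_tok)
# One reward evaluation plus one gradient yields an estimated reward for
# every token in the vocabulary, instead of V separate discriminator calls.
r_tok = reward(e_tok)
g = reward_grad(e_tok)
r_taylor = r_tok + (E - e_tok) @ g    # shape (V,): estimated reward per token
```

By construction the expansion is exact at the sampled token (`r_taylor[tok] == r_tok`), and its accuracy for neighbors degrades with their embedding distance, which is why the method restricts the augmentation to nearby tokens.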

Cite

Text

Lin et al. "TaylorGAN: Neighbor-Augmented Policy Update Towards Sample-Efficient Natural Language Generation." Neural Information Processing Systems, 2020.

Markdown

[Lin et al. "TaylorGAN: Neighbor-Augmented Policy Update Towards Sample-Efficient Natural Language Generation." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/lin2020neurips-taylorgan/)

BibTeX

@inproceedings{lin2020neurips-taylorgan,
  title     = {{TaylorGAN: Neighbor-Augmented Policy Update Towards Sample-Efficient Natural Language Generation}},
  author    = {Lin, Chun-Hsing and Wu, Siang-Ruei and Lee, Hung-yi and Chen, Yun-Nung},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/lin2020neurips-taylorgan/}
}