Diffusion-LM Improves Controllable Text Generation

Abstract

Controlling the behavior of language models (LMs) without re-training is a major open problem in natural language generation. While recent works have demonstrated successes on controlling simple sentence attributes (e.g., sentiment), there has been little progress on complex, fine-grained controls (e.g., syntactic structure). To address this challenge, we develop a new non-autoregressive language model based on continuous diffusion that we call Diffusion-LM. Building upon the recent successes of diffusion models in continuous domains, Diffusion-LM iteratively denoises a sequence of Gaussian vectors into word vectors, yielding a sequence of intermediate latent variables. The continuous, hierarchical nature of these intermediate variables enables a simple gradient-based algorithm to perform complex, controllable generation tasks. We demonstrate successful control of Diffusion-LM for six challenging fine-grained control tasks, significantly outperforming prior work.
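
To make the mechanism in the abstract concrete, the sketch below illustrates the kind of denoising loop it describes: a sequence of Gaussian vectors is iteratively denoised into word vectors, and at each step a classifier gradient nudges the intermediate latent toward the control target. This is a minimal sketch under stated assumptions, not the authors' released implementation; `denoiser` and `classifier` are hypothetical stand-ins for a trained Diffusion-LM denoising network and an attribute classifier, and the fixed noise scale is a placeholder for a real per-step schedule.

# Illustrative sketch only: `denoiser`, `classifier`, and the noise scale
# are hypothetical placeholders, not the paper's released code.
import torch

def controlled_generation(denoiser, classifier, seq_len, dim,
                          num_steps=200, guidance_scale=1.0, sigma=0.1):
    """Denoise Gaussian vectors into word vectors, steering each
    intermediate latent with classifier gradients."""
    x = torch.randn(1, seq_len, dim)  # start from pure Gaussian noise
    for t in reversed(range(num_steps)):
        t_batch = torch.full((1,), t, dtype=torch.long)
        with torch.no_grad():
            mean = denoiser(x, t_batch)  # model's estimate of the less-noisy latent
        # Gradient-based control: nudge the latent toward a higher
        # classifier score for the desired attribute.
        x_in = x.detach().requires_grad_(True)
        score = classifier(x_in, t_batch).sum()
        grad = torch.autograd.grad(score, x_in)[0]
        mean = mean + guidance_scale * grad
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + sigma * noise  # sample the next, less-noisy latent
    return x  # final latents; the paper rounds these to the nearest word embeddings

In the paper, the final continuous latents are rounded to the nearest word embeddings to produce the output text; the hierarchy of intermediate latents is what allows the gradient step above to operate at every noise level.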

Cite

Text

Li et al. "Diffusion-LM Improves Controllable Text Generation." Neural Information Processing Systems, 2022.

Markdown

[Li et al. "Diffusion-LM Improves Controllable Text Generation." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/li2022neurips-diffusionlm/)

BibTeX

@inproceedings{li2022neurips-diffusionlm,
  title     = {{Diffusion-LM Improves Controllable Text Generation}},
  author    = {Li, Xiang and Thickstun, John and Gulrajani, Ishaan and Liang, Percy and Hashimoto, Tatsunori B.},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/li2022neurips-diffusionlm/}
}