Simple and Effective Masked Diffusion Language Models

Abstract

While diffusion models excel at generating high-quality images, prior work reports a significant performance gap between diffusion and autoregressive (AR) methods on language modeling. In this work, we show that simple masked discrete diffusion is more performant than previously thought. We apply an effective training recipe that improves the performance of masked diffusion models and derive a simplified, Rao-Blackwellized objective that results in additional improvements. Our objective has a simple form—it is a mixture of classical masked language modeling losses—and can be used to train encoder-only language models that admit efficient samplers, including ones that can generate arbitrary lengths of text semi-autoregressively like a traditional language model. On language modeling benchmarks, a range of masked diffusion models trained with modern engineering practices achieves a new state-of-the-art among diffusion models, and approaches AR perplexity.
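The "mixture of classical masked language modeling losses" mentioned in the abstract can be illustrated with a short sketch. The snippet below is an assumption-laden example rather than the authors' released code: it assumes a linear masking schedule (alpha_t = 1 - t), a hypothetical denoiser module that maps corrupted token ids to per-position logits, and a reserved mask_id token. Under those assumptions, each training step computes an ordinary masked cross-entropy reweighted by 1/t for a randomly sampled masking level t.

import torch
import torch.nn.functional as F

def masked_diffusion_loss(denoiser, x, mask_id, eps=1e-3):
    """One Monte Carlo sample of a weighted masked-LM objective (illustrative sketch).

    x: LongTensor of clean token ids, shape (batch, seq_len).
    denoiser: module mapping corrupted ids to logits, shape (batch, seq_len, vocab).
    """
    b, l = x.shape
    # Sample a masking level t in [eps, 1); with the assumed schedule
    # alpha_t = 1 - t, the per-token masking probability is simply t.
    t = torch.rand(b, 1, device=x.device) * (1 - eps) + eps
    mask = torch.rand(b, l, device=x.device) < t                 # positions to mask
    z_t = torch.where(mask, torch.full_like(x, mask_id), x)      # corrupted sequence

    logits = denoiser(z_t)                                        # (b, l, vocab)
    ce = F.cross_entropy(logits.transpose(1, 2), x, reduction="none")  # (b, l)

    # Only masked positions contribute; the 1/t factor is the schedule weight
    # -alpha_t' / (1 - alpha_t) under the assumed linear schedule.
    return ((ce * mask) / t).sum(dim=1).mean()

Sampling t anywhere from near 0 (almost nothing masked) to near 1 (everything masked) is what makes this a weighted mixture of standard masked language modeling losses over different masking rates.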

Cite

Text

Sahoo et al. "Simple and Effective Masked Diffusion Language Models." NeurIPS 2024 Workshops: M3L, 2024.

Markdown

[Sahoo et al. "Simple and Effective Masked Diffusion Language Models." NeurIPS 2024 Workshops: M3L, 2024.](https://mlanthology.org/neuripsw/2024/sahoo2024neuripsw-simple/)

BibTeX

@inproceedings{sahoo2024neuripsw-simple,
  title     = {{Simple and Effective Masked Diffusion Language Models}},
  author    = {Sahoo, Subham Sekhar and Arriola, Marianne and Gokaslan, Aaron and Schiff, Yair and Marroquin, Edgar Mariano and Chiu, Justin T and Rush, Alexander M and Kuleshov, Volodymyr},
  booktitle = {NeurIPS 2024 Workshops: M3L},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/sahoo2024neuripsw-simple/}
}