Denoising Autoregressive Transformers for Scalable Text-to-Image Generation

Abstract

Diffusion models have become the dominant approach for visual generation. They are trained by denoising a Markovian process that gradually adds noise to the input. We argue that the Markovian property limits the model's ability to fully utilize the generation trajectory, leading to inefficiencies during training and inference. In this paper, we propose DART, a transformer-based model that unifies autoregressive (AR) and diffusion modeling within a non-Markovian framework. DART iteratively denoises image patches spatially and spectrally using an AR model with the same architecture as standard language models. DART does not rely on image quantization, which enables more effective image modeling while maintaining flexibility. Furthermore, DART seamlessly trains on both text and image data in a unified model. Our approach demonstrates competitive performance on class-conditioned and text-to-image generation tasks, offering a scalable, efficient alternative to traditional diffusion models. Through this unified framework, DART sets a new benchmark for scalable, high-quality image synthesis.

Cite

Text

Gu et al. "Denoising Autoregressive Transformers for Scalable Text-to-Image Generation." International Conference on Learning Representations, 2025.

Markdown

[Gu et al. "Denoising Autoregressive Transformers for Scalable Text-to-Image Generation." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/gu2025iclr-denoising/)

BibTeX

@inproceedings{gu2025iclr-denoising,
  title     = {{Denoising Autoregressive Transformers for Scalable Text-to-Image Generation}},
  author    = {Gu, Jiatao and Wang, Yuyang and Zhang, Yizhe and Zhang, Qihang and Zhang, Dinghuai and Jaitly, Navdeep and Susskind, Joshua M. and Zhai, Shuangfei},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/gu2025iclr-denoising/}
}