Generative Pre-Trained Autoregressive Diffusion Transformer

Abstract

In this work, we present GPDiT, a Generative Pre-trained Autoregressive Diffusion Transformer that unifies the strengths of diffusion and autoregressive modeling for long-range video synthesis within a continuous latent space. Instead of predicting discrete tokens, GPDiT autoregressively predicts future latent frames using a diffusion loss, enabling natural modeling of motion dynamics and semantic consistency across frames. This continuous autoregressive framework not only enhances generation quality but also endows the model with representation capabilities. Additionally, we introduce a lightweight causal attention variant and a parameter-free rotation-based time-conditioning mechanism, improving both training and inference efficiency. Extensive experiments demonstrate that GPDiT achieves strong performance in video generation quality, video representation ability, and few-shot learning tasks, highlighting its potential as an effective framework for video modeling in continuous space.
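The sketch below illustrates, in PyTorch, the two mechanisms named in the abstract: a parameter-free rotation-based time conditioning, rendered here as a RoPE-style channel-pair rotation keyed to the diffusion timestep, and an autoregressive diffusion loss that conditions on clean past latent frames while denoising the current one. The model signature, noise schedule, and all helper names are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn.functional as F

def rotate_time_condition(x, t, max_t=1000):
    # Sketch of parameter-free rotation-based time conditioning:
    # split channels into two halves and rotate each pair by an
    # angle proportional to the diffusion timestep t.
    # x: (B, N, D) latent tokens with even D; t: (B,) timesteps.
    # The exact angle schedule in the paper may differ.
    theta = (t.float() / max_t * torch.pi / 2).view(-1, 1, 1)  # (B, 1, 1)
    x1, x2 = x.chunk(2, dim=-1)                                # channel pairs
    return torch.cat([x1 * torch.cos(theta) - x2 * torch.sin(theta),
                      x1 * torch.sin(theta) + x2 * torch.cos(theta)], dim=-1)

def ar_diffusion_loss(model, latents, t, max_t=1000):
    # Sketch of the autoregressive diffusion objective: keep past
    # latent frames clean, noise only the current frame, and regress
    # the injected noise. latents: (B, T, N, D) continuous frames.
    # `model` is a hypothetical transformer taking (past, noisy, t).
    past, target = latents[:, :-1], latents[:, -1]
    noise = torch.randn_like(target)
    alpha = (1 - t.float() / max_t).view(-1, 1, 1)             # toy schedule
    noisy = alpha.sqrt() * target + (1 - alpha).sqrt() * noise
    pred = model(past, rotate_time_condition(noisy, t), t)
    return F.mse_loss(pred, noise)

Because only the last frame is noised, attention over past frames can be made causal and cached across autoregressive steps, which is one plausible reading of the efficiency gains the abstract attributes to the lightweight causal attention variant.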

Cite

Text

Zhang et al. "Generative Pre-Trained Autoregressive Diffusion Transformer." Advances in Neural Information Processing Systems, 2025.

Markdown

[Zhang et al. "Generative Pre-Trained Autoregressive Diffusion Transformer." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/zhang2025neurips-generative/)

BibTeX

@inproceedings{zhang2025neurips-generative,
  title     = {{Generative Pre-Trained Autoregressive Diffusion Transformer}},
  author    = {Zhang, Yuan and Jiang, Jiacheng and Ma, Guoqing and Lu, Zhiying and Wang, Bo and Huang, Haoyang and Yuan, Jianlong and Duan, Nan},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/zhang2025neurips-generative/}
}