Dynamic Diffusion Transformer

Abstract

Diffusion Transformer (DiT), an emerging diffusion model for image generation, has demonstrated superior performance but suffers from substantial computational costs. Our investigations reveal that these costs stem from the static inference paradigm, which inevitably introduces redundant computation in certain diffusion timesteps and spatial regions. To address this inefficiency, we propose Dynamic Diffusion Transformer (DyDiT), an architecture that dynamically adjusts its computation along both timestep and spatial dimensions during generation. Specifically, we introduce a Timestep-wise Dynamic Width (TDW) approach that adapts model width conditioned on the generation timesteps. In addition, we design a Spatial-wise Dynamic Token (SDT) strategy to avoid redundant computation at unnecessary spatial locations. Extensive experiments on various datasets and different-sized models verify the superiority of DyDiT. Notably, with <3% additional fine-tuning iterations, our method reduces the FLOPs of DiT-XL by 51%, accelerates generation by 1.73×, and achieves a competitive FID score of 2.07 on ImageNet.
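
To make the two mechanisms concrete, here is a minimal PyTorch sketch of how timestep-wise width gating (TDW) and spatial token routing (SDT) could compose inside one transformer block. This is an illustration under assumptions, not the authors' implementation: all names (`GatedMLPBlock`, `DynamicBlock`, `num_groups`, etc.) are hypothetical, and the continuous sigmoid gates stand in for whatever discrete routing the paper actually trains and binarizes at inference.

```python
import torch
import torch.nn as nn

class GatedMLPBlock(nn.Module):
    """Toy transformer MLP block whose hidden channels are grouped;
    a timestep-conditioned gate can switch whole groups off (TDW)."""
    def __init__(self, dim: int, hidden: int, num_groups: int):
        super().__init__()
        assert hidden % num_groups == 0
        self.num_groups = num_groups
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)

    def forward(self, x: torch.Tensor, gates: torch.Tensor) -> torch.Tensor:
        # gates: (B, G) in [0, 1]; broadcast over each channel group.
        h = torch.relu(self.fc1(x))                        # (B, N, H)
        B, N, H = h.shape
        h = h.view(B, N, self.num_groups, H // self.num_groups)
        h = h * gates[:, None, :, None]                    # mask groups
        return self.fc2(h.view(B, N, H))

class DynamicBlock(nn.Module):
    """Combines TDW (width gates predicted from the timestep embedding)
    with SDT (a per-token keep score that lets tokens bypass the block)."""
    def __init__(self, dim: int, hidden: int, num_groups: int, t_dim: int):
        super().__init__()
        self.mlp = GatedMLPBlock(dim, hidden, num_groups)
        self.width_gate = nn.Linear(t_dim, num_groups)   # TDW router
        self.token_gate = nn.Linear(dim, 1)              # SDT router

    def forward(self, x: torch.Tensor, t_emb: torch.Tensor) -> torch.Tensor:
        gates = torch.sigmoid(self.width_gate(t_emb))    # (B, G)
        keep = torch.sigmoid(self.token_gate(x))         # (B, N, 1)
        y = self.mlp(x, gates)
        # Tokens with keep ~ 0 skip the block via the residual path;
        # at inference both gates would be binarized so the skipped
        # channel groups and tokens incur no FLOPs at all.
        return keep * y + (1.0 - keep) * x

# Usage: B=2 images, N=16 tokens, dim=64, timestep embedding t_dim=64.
blk = DynamicBlock(dim=64, hidden=256, num_groups=4, t_dim=64)
out = blk(torch.randn(2, 16, 64), torch.randn(2, 64))
print(out.shape)  # torch.Size([2, 16, 64])
```

Because both routers are conditioned on cheap inputs (the timestep embedding and per-token features), the savings scale with how aggressively the gates close at easy timesteps and in low-detail spatial regions, which is the source of the FLOP reduction the abstract reports.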

Cite

Text

Zhao et al. "Dynamic Diffusion Transformer." International Conference on Learning Representations, 2025.

Markdown

[Zhao et al. "Dynamic Diffusion Transformer." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/zhao2025iclr-dynamic/)

BibTeX

@inproceedings{zhao2025iclr-dynamic,
  title     = {{Dynamic Diffusion Transformer}},
  author    = {Zhao, Wangbo and Han, Yizeng and Tang, Jiasheng and Wang, Kai and Song, Yibing and Huang, Gao and Wang, Fan and You, Yang},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/zhao2025iclr-dynamic/}
}