FasterDiT: Towards Faster Diffusion Transformers Training Without Architecture Modification

Abstract

Diffusion Transformers (DiT) have attracted significant research attention, but they suffer from a slow convergence rate. In this paper, we aim to accelerate DiT training without any architectural modification. We identify two issues in the training process: first, certain training strategies do not perform consistently well across different data; second, the effectiveness of supervision at specific timesteps is limited. In response, we make the following contributions: (1) We introduce a new perspective for interpreting the failure of these strategies. Specifically, we slightly extend the definition of the Signal-to-Noise Ratio (SNR) and propose observing the Probability Density Function (PDF) of SNR to understand the essence of a strategy's robustness to data. (2) We conduct extensive experiments and report over one hundred experimental results to empirically summarize a unified acceleration strategy from the PDF perspective. (3) We develop a new supervision method that further accelerates DiT training. Based on these findings, we propose FasterDiT, an exceedingly simple and practicable design strategy. With only a few lines of code modification, it achieves 2.30 FID on ImageNet at 256x256 resolution in 1000k training iterations, comparable to DiT (2.27 FID) but 7 times faster to train.
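To make the SNR/PDF idea concrete, the following is a minimal, hypothetical sketch (not the authors' code) that computes the conventional SNR, defined as alpha_t^2 / sigma_t^2, under an illustrative linear interpolation noise schedule, and estimates the empirical PDF of log-SNR over uniformly sampled timesteps via a normalized histogram. The schedule, sampling distribution, and bin count are all assumptions for illustration.

```python
import numpy as np

# Illustrative linear interpolation schedule (an assumption, not the paper's
# exact setting): x_t = (1 - t) * x_0 + t * eps, so alpha_t = 1 - t, sigma_t = t.
def snr(t):
    """Conventional signal-to-noise ratio: alpha_t^2 / sigma_t^2."""
    alpha_t, sigma_t = 1.0 - t, t
    return (alpha_t / sigma_t) ** 2

# Sample timesteps uniformly (clipped away from the endpoints to avoid
# division by zero) and estimate the empirical PDF of log-SNR.
rng = np.random.default_rng(0)
t = rng.uniform(1e-3, 1.0 - 1e-3, size=100_000)
log_snr = np.log(snr(t))
pdf, edges = np.histogram(log_snr, bins=50, density=True)
```

Inspecting how this density shifts under different training strategies (e.g., timestep reweighting or schedule changes) is the kind of observation the abstract's PDF perspective refers to; `pdf * np.diff(edges)` sums to 1, as a probability density should.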

Cite

Text

Yao et al. "FasterDiT: Towards Faster Diffusion Transformers Training Without Architecture Modification." Neural Information Processing Systems, 2024. doi:10.52202/079017-1787

Markdown

[Yao et al. "FasterDiT: Towards Faster Diffusion Transformers Training Without Architecture Modification." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/yao2024neurips-fasterdit/) doi:10.52202/079017-1787

BibTeX

@inproceedings{yao2024neurips-fasterdit,
  title     = {{FasterDiT: Towards Faster Diffusion Transformers Training Without Architecture Modification}},
  author    = {Yao, Jingfeng and Wang, Cheng and Liu, Wenyu and Wang, Xinggang},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-1787},
  url       = {https://mlanthology.org/neurips/2024/yao2024neurips-fasterdit/}
}