LazyDiT: Lazy Learning for the Acceleration of Diffusion Transformers
Abstract
Diffusion Transformers have emerged as the preeminent models for a wide array of generative tasks, demonstrating superior performance and efficacy across various applications. These promising results come at the cost of slow inference, as each denoising step requires running the whole transformer model with a large number of parameters. In this paper, we show that performing the full computation of the model at each diffusion step is unnecessary, as some computations can be skipped by lazily reusing the results of previous steps. Furthermore, we show that the lower bound of similarity between outputs at consecutive steps is notably high, and that this similarity can be linearly approximated from the inputs. To validate these observations, we propose LazyDiT, a lazy learning framework that efficiently leverages cached results from earlier steps to skip redundant computations. Specifically, we incorporate lazy learning layers into the model, trained to maximize laziness, enabling dynamic skipping of redundant computations. Experimental results show that LazyDiT outperforms the DDIM sampler across multiple diffusion transformer models at various resolutions. Furthermore, we implement our method on mobile devices, achieving better performance than DDIM at similar latency.
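The caching mechanism described above can be sketched in a few lines. The names (`block`, `LazyBlock`) and the exact form of the linear predictor are illustrative assumptions for this sketch, not the paper's precise formulation: a cheap linear function of the current input estimates how similar this step's output would be to the cached one, and the expensive block is skipped when that estimate is high.

```python
import numpy as np

def block(x, W):
    # Stand-in for an expensive transformer block (hypothetical).
    return np.tanh(x @ W)

class LazyBlock:
    """Sketch of lazy reuse: a linear predictor on the current input
    estimates the similarity of this step's output to the cached output
    from the previous step; when the predicted similarity exceeds a
    threshold, the block is skipped and the cache is returned instead."""

    def __init__(self, W, v, b, threshold=0.9):
        self.W = W                 # block weights
        self.v, self.b = v, b      # lazy-predictor parameters (assumed linear)
        self.threshold = threshold
        self.cache = None

    def __call__(self, x):
        if self.cache is not None:
            # Linear approximation of output similarity from the input.
            pred_sim = float(x.mean(axis=0) @ self.v + self.b)
            if pred_sim > self.threshold:
                return self.cache  # lazy path: skip the computation
        self.cache = block(x, self.W)  # full path: recompute, refresh cache
        return self.cache

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
# Degenerate predictor (v = 0, b = 1) that always predicts high
# similarity, so every step after the first reuses the cache.
lazy = LazyBlock(W, v=np.zeros(4), b=1.0)
x1 = rng.normal(size=(2, 4))
out1 = lazy(x1)            # first step: full computation
out2 = lazy(x1 + 0.01)     # second step: cache reused, block skipped
```

In the actual framework the predictor parameters are trained so that skipping happens only where consecutive-step outputs are genuinely similar; the fixed parameters here simply make the skip path observable.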
Cite
Text
Shen et al. "LazyDiT: Lazy Learning for the Acceleration of Diffusion Transformers." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I19.34248
Markdown
[Shen et al. "LazyDiT: Lazy Learning for the Acceleration of Diffusion Transformers." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/shen2025aaai-lazydit/) doi:10.1609/AAAI.V39I19.34248
BibTeX
@inproceedings{shen2025aaai-lazydit,
title = {{LazyDiT: Lazy Learning for the Acceleration of Diffusion Transformers}},
author = {Shen, Xuan and Song, Zhao and Zhou, Yufa and Chen, Bo and Li, Yanyu and Gong, Yifan and Zhang, Kai and Tan, Hao and Kuen, Jason and Ding, Henghui and Shu, Zhihao and Niu, Wei and Zhao, Pu and Wang, Yanzhi and Gu, Jiuxiang},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {20409-20417},
doi = {10.1609/AAAI.V39I19.34248},
url = {https://mlanthology.org/aaai/2025/shen2025aaai-lazydit/}
}