DiTFastAttn: Attention Compression for Diffusion Transformer Models
Abstract
Diffusion Transformers (DiT) excel at image and video generation but face computational challenges due to the quadratic complexity of the self-attention operator. We propose DiTFastAttn, a post-training compression method to alleviate the computational bottleneck of DiT. We identify three key redundancies in the attention computation during DiT inference: (1) spatial redundancy, where many attention heads focus on local information; (2) temporal redundancy, with high similarity between the attention outputs of neighboring steps; (3) conditional redundancy, where conditional and unconditional inferences exhibit significant similarity. We propose three techniques to reduce these redundancies: (1) $\textit{Window Attention with Residual Sharing}$ to reduce spatial redundancy; (2) $\textit{Attention Sharing across Timesteps}$ to exploit the similarity between steps; (3) $\textit{Attention Sharing across CFG}$ to skip redundant computations during conditional generation.
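The sketch below illustrates, in plain PyTorch, how an attention operator combining the three ideas might be organized. It is not the authors' released implementation: the function names (`fast_attention`, `window_attention`), the `strategy` labels, the cache layout, and the per-step choice of strategy (which the paper derives from a compression plan) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def full_attention(q, k, v):
    # Standard scaled dot-product attention over all tokens.
    return F.scaled_dot_product_attention(q, k, v)

def window_attention(q, k, v, window_size):
    # Local attention: each query attends only to keys within +/- window_size positions.
    n = q.shape[-2]
    idx = torch.arange(n, device=q.device)
    mask = (idx[None, :] - idx[:, None]).abs() <= window_size
    return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)

def fast_attention(q, k, v, strategy, cache, window_size=64):
    # (3) Attention Sharing across CFG: the batch holds [conditional; unconditional]
    # halves, so compute only the conditional half and duplicate its output.
    half = q.shape[0] // 2
    qc, kc, vc = q[:half], k[:half], v[:half]

    if strategy == "full":
        out = full_attention(qc, kc, vc)
        # Cache the residual between full and window attention for later steps.
        cache["residual"] = out - window_attention(qc, kc, vc, window_size)
    elif strategy == "window+residual":
        # (1) Window Attention with Residual Sharing: cheap local attention plus
        # the residual cached at the most recent full-attention step.
        out = window_attention(qc, kc, vc, window_size) + cache["residual"]
    else:  # strategy == "share_step"
        # (2) Attention Sharing across Timesteps: reuse the previous step's output.
        out = cache["out"]

    cache["out"] = out
    return torch.cat([out, out], dim=0)

# Tiny usage example: batch of 2 (conditional + unconditional), 8 heads, 256 tokens.
q = k = v = torch.randn(2, 8, 256, 64)
cache = {}
for strategy in ["full", "window+residual", "share_step"]:
    out = fast_attention(q, k, v, strategy, cache)
```

In this hypothetical setup, a "full" step refreshes the caches, while the cheaper strategies reuse them; the paper selects which strategy each layer and timestep uses so that quality is preserved.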
Cite
Text
Yuan et al. "DiTFastAttn: Attention Compression for Diffusion Transformer Models." Neural Information Processing Systems, 2024. doi:10.52202/079017-0037
Markdown
[Yuan et al. "DiTFastAttn: Attention Compression for Diffusion Transformer Models." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/yuan2024neurips-ditfastattn/) doi:10.52202/079017-0037
BibTeX
@inproceedings{yuan2024neurips-ditfastattn,
title = {{DiTFastAttn: Attention Compression for Diffusion Transformer Models}},
author = {Yuan, Zhihang and Zhang, Hanling and Lu, Pu and Ning, Xuefei and Zhang, Linfeng and Zhao, Tianchen and Yan, Shengen and Dai, Guohao and Wang, Yu},
booktitle = {Neural Information Processing Systems},
year = {2024},
doi = {10.52202/079017-0037},
url = {https://mlanthology.org/neurips/2024/yuan2024neurips-ditfastattn/}
}