Can We Achieve Efficient Diffusion Without Self-Attention? Distilling Self-Attention into Convolutions

ICCV 2025, pp. 17401-17410

Abstract

Contemporary diffusion models built upon U-Net or Diffusion Transformer (DiT) architectures have revolutionized image generation through transformer-based attention mechanisms. The prevailing paradigm has commonly employed self-attention, with its quadratic computational complexity, to handle global spatial relationships in complex images, thereby synthesizing high-fidelity images with coherent visual semantics. Contrary to conventional wisdom, our systematic layer-wise analysis reveals an interesting discrepancy: self-attention in pre-trained diffusion models predominantly exhibits localized attention patterns, closely resembling convolutional inductive biases, while its global interactions are smooth, low-intensity, and may be less critical than commonly assumed. Driven by this observation, we propose ΔConvFusion, which replaces conventional self-attention modules with Pyramid Convolution Blocks (ΔConvBlocks). By distilling attention patterns into localized convolutional operations while keeping all other components frozen, ΔConvFusion achieves performance comparable to transformer-based counterparts while reducing computational cost by 6929x and surpassing LinFusion by 5.42x in efficiency, all without compromising generative fidelity.
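To make the distillation recipe described above concrete, here is a minimal PyTorch sketch. `PyramidConvBlock` and `distillation_step` are hypothetical illustrations written for this page, not the authors' released code: the actual ΔConvBlock design (kernel sizes, branch fusion, normalization) and training objective may differ from what is shown here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidConvBlock(nn.Module):
    """Hypothetical stand-in for a ΔConvBlock: a pyramid of depthwise
    convolutions at several kernel sizes, fused by a 1x1 convolution.
    The paper's exact block design may differ."""
    def __init__(self, channels: int, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # One depthwise branch per kernel size captures local patterns
        # at multiple receptive-field scales.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
            for k in kernel_sizes
        )
        # Pointwise convolution mixes the concatenated branch outputs.
        self.mix = nn.Conv2d(channels * len(kernel_sizes), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) latent feature map from the diffusion backbone.
        return self.mix(torch.cat([b(x) for b in self.branches], dim=1))

def distillation_step(conv_block, frozen_attn, x, optimizer):
    """One feature-distillation step: the conv block (student) is trained
    to match the output of the frozen pre-trained self-attention module
    (teacher) on the same feature map. Assumes frozen_attn maps
    (B, C, H, W) -> (B, C, H, W)."""
    with torch.no_grad():
        target = frozen_attn(x)                 # teacher output, no gradients
    loss = F.mse_loss(conv_block(x), target)    # match student to teacher
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a full pipeline, one such block would be trained per self-attention layer over latent feature maps drawn at diffusion training timesteps, with all other network components kept frozen, as the abstract describes.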

Cite

Text

Dong et al. "Can We Achieve Efficient Diffusion Without Self-Attention? Distilling Self-Attention into Convolutions." International Conference on Computer Vision, 2025.

Markdown

[Dong et al. "Can We Achieve Efficient Diffusion Without Self-Attention? Distilling Self-Attention into Convolutions." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/dong2025iccv-we/)

BibTeX

@inproceedings{dong2025iccv-we,
  title     = {{Can We Achieve Efficient Diffusion Without Self-Attention? Distilling Self-Attention into Convolutions}},
  author    = {Dong, Ziyi and Zhou, Chengxing and Deng, Weijian and Wei, Pengxu and Ji, Xiangyang and Lin, Liang},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {17401--17410},
  url       = {https://mlanthology.org/iccv/2025/dong2025iccv-we/}
}