ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation

Abstract

Diffusion transformers have demonstrated remarkable performance in visual generation tasks, such as generating realistic images or videos from textual instructions. However, larger model sizes and the multi-frame processing required for video generation increase computational and memory costs, posing challenges for practical deployment on edge devices. Post-Training Quantization (PTQ) is an effective method for reducing memory costs and computational complexity, yet we find that existing quantization methods struggle when applied to diffusion transformers for text-to-image and video tasks. To address this, we systematically analyze the sources of quantization error and identify the unique challenges posed by DiT quantization. Accordingly, we design an improved quantization scheme, ViDiT-Q (**V**ideo & **I**mage **Di**ffusion **T**ransformer **Q**uantization), tailored specifically for DiT models. We validate the effectiveness of ViDiT-Q across a variety of text-to-image and video models, achieving W8A8 and W4A8 quantization with negligible degradation in visual quality and metrics. Additionally, we implement efficient GPU kernels to achieve practical 2-2.5x memory savings and a 1.4-1.7x end-to-end latency speedup.
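
In this notation, W8A8 means 8-bit weights and 8-bit activations (W4A8 means 4-bit weights, 8-bit activations). As a rough illustration only, and not the paper's actual ViDiT-Q scheme, the sketch below shows per-tensor symmetric fake quantization of a single linear layer in PyTorch; the helper names and shapes here are assumptions for demonstration.

```python
import torch

def quantize_symmetric(x: torch.Tensor, n_bits: int = 8) -> torch.Tensor:
    """Per-tensor symmetric fake quantization: quantize then dequantize."""
    qmax = 2 ** (n_bits - 1) - 1                    # e.g. 127 for 8 bits
    scale = x.abs().max().clamp(min=1e-8) / qmax    # per-tensor scale
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return q * scale                                # back to float for simulation

def w8a8_linear(x: torch.Tensor, weight: torch.Tensor, bias=None) -> torch.Tensor:
    """Simulate a W8A8 linear layer: quantize activations and weights, then matmul."""
    x_q = quantize_symmetric(x, n_bits=8)           # A8: 8-bit activations
    w_q = quantize_symmetric(weight, n_bits=8)      # W8: 8-bit weights
    return torch.nn.functional.linear(x_q, w_q, bias)

# Compare full-precision output with the simulated W8A8 output of one layer.
x = torch.randn(4, 64)
w = torch.randn(128, 64)
err = (torch.nn.functional.linear(x, w) - w8a8_linear(x, w)).abs().mean()
print(f"mean abs error under simulated W8A8: {err:.4f}")
```

The paper's contribution lies in how the quantization parameters are chosen for DiT models so that such errors stay negligible for image and video generation; the uniform per-tensor scaling above is only a baseline illustration.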

Cite

Text

Zhao et al. "ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation." International Conference on Learning Representations, 2025.

Markdown

[Zhao et al. "ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/zhao2025iclr-viditq/)

BibTeX

@inproceedings{zhao2025iclr-viditq,
  title     = {{ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation}},
  author    = {Zhao, Tianchen and Fang, Tongcheng and Huang, Haofeng and Wan, Rui and Soedarmadji, Widyadewi and Liu, Enshu and Li, Shiyao and Lin, Zinan and Dai, Guohao and Yan, Shengen and Yang, Huazhong and Ning, Xuefei and Wang, Yu},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/zhao2025iclr-viditq/}
}