DMQ: Dissecting Outliers of Diffusion Models for Post-Training Quantization

ICCV 2025, pp. 18510-18520

Abstract

Diffusion models have achieved remarkable success in image generation but come with significant computational costs, posing challenges for deployment in resource-constrained environments. Recent post-training quantization (PTQ) methods have attempted to mitigate this issue by focusing on the iterative nature of diffusion models. However, these approaches often overlook outliers, leading to degraded performance at low bit-widths. In this paper, we propose DMQ, which combines Learned Equivalent Scaling (LES) and channel-wise Power-of-Two Scaling (PTS) to effectively address these challenges. Learned Equivalent Scaling optimizes channel-wise scaling factors to redistribute quantization difficulty between weights and activations, reducing overall quantization error. Recognizing that early denoising steps, despite having small quantization errors, crucially impact the final output due to error accumulation, we incorporate an adaptive timestep weighting scheme that prioritizes these critical steps during learning. Furthermore, observing that layers such as skip connections exhibit high inter-channel variance, we introduce channel-wise Power-of-Two Scaling for activations. To ensure robust selection of PTS factors even with a small calibration set, we introduce a voting algorithm that enhances reliability. Extensive experiments demonstrate that our method significantly outperforms existing methods, especially at low bit-widths such as W4A6 (4-bit weight, 6-bit activation) and W4A8, maintaining both high image-generation quality and model stability. The code is available at https://github.com/LeeDongYeun/dmq.
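Illustration

The core of LES is the algebraic identity behind equivalent scaling: for a linear layer, X @ W.T == (X / s) @ (W * s).T for any positive per-channel scale s, so scaling changes nothing in full precision while redistributing quantization difficulty between activations and weights. Below is a minimal NumPy sketch of this idea; the closed-form scale rule, bit-widths, and toy shapes are illustrative assumptions (DMQ learns its scaling factors by minimizing quantization error rather than using this closed-form rule).

import numpy as np

def fake_quant(x, n_bits):
    # Uniform symmetric fake-quantization with a single per-tensor step size.
    qmax = 2 ** (n_bits - 1) - 1
    step = np.abs(x).max() / qmax
    return np.clip(np.round(x / step), -qmax - 1, qmax) * step

def equivalent_scaling(W, X, s):
    # Fold s into the weights and divide it out of the activations:
    # X @ W.T == (X / s) @ (W * s).T, so the output is unchanged in
    # full precision while the outlier burden moves from X to W.
    return W * s, X / s

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))
X[:, 3] *= 50.0                           # one activation channel carries an outlier
W = rng.normal(size=(8, 4))

# Hypothetical per-channel factors (a SmoothQuant-style alpha = 0.5 rule);
# DMQ learns these factors instead of using a closed-form initialization.
s = np.sqrt(np.abs(X).max(axis=0) / np.abs(W).max(axis=0))

W_s, X_s = equivalent_scaling(W, X, s)
assert np.allclose(X @ W.T, X_s @ W_s.T)  # exact before quantization

ref = X @ W.T
err_plain = np.abs(fake_quant(X, 6) @ fake_quant(W, 4).T - ref).mean()
err_les   = np.abs(fake_quant(X_s, 6) @ fake_quant(W_s, 4).T - ref).mean()
print(f"W4A6 error without scaling: {err_plain:.4f}")
print(f"W4A6 error with scaling:    {err_les:.4f}")

Continuing the sketch, channel-wise Power-of-Two Scaling divides each activation channel by a factor 2^k before a shared quantizer, so the factor can be folded in cheaply, and the voting step picks each channel's exponent by majority vote across calibration batches rather than trusting a single small batch. The exponent rule and the median reference below are hypothetical stand-ins for the paper's selection criterion.

from collections import Counter

def pts_exponents(batch, ref=None):
    # Per-channel power-of-two exponents: bring each channel's peak down
    # to a shared reference range (the median peak here, an assumption).
    peaks = np.abs(batch).max(axis=0)
    ref = np.median(peaks) if ref is None else ref
    return np.maximum(0, np.ceil(np.log2(peaks / ref))).astype(int)

def vote_pts(batches):
    # Majority vote per channel across calibration batches, which is more
    # robust to a small, noisy calibration set than a single estimate.
    votes = [pts_exponents(b) for b in batches]
    return np.array([Counter(v[c] for v in votes).most_common(1)[0][0]
                     for c in range(votes[0].size)])

batches = [rng.normal(size=(32, 4)) * np.array([1.0, 1.0, 1.0, 40.0])
           for _ in range(8)]
k = vote_pts(batches)                     # e.g. [0 0 0 6]
X_pts = batches[0] / (2.0 ** k)           # 2^-k folds in as a cheap bit-shift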

Cite

Text

Lee et al. "DMQ: Dissecting Outliers of Diffusion Models for Post-Training Quantization." International Conference on Computer Vision, 2025.

Markdown

[Lee et al. "DMQ: Dissecting Outliers of Diffusion Models for Post-Training Quantization." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/lee2025iccv-dmq/)

BibTeX

@inproceedings{lee2025iccv-dmq,
  title     = {{DMQ: Dissecting Outliers of Diffusion Models for Post-Training Quantization}},
  author    = {Lee, Dongyeun and Hur, Jiwan and Shon, Hyounguk and Lee, Jae Young and Kim, Junmo},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {18510--18520},
  url       = {https://mlanthology.org/iccv/2025/lee2025iccv-dmq/}
}