IntLoRA: Integral Low-Rank Adaptation of Quantized Diffusion Models

Abstract

Fine-tuning pre-trained diffusion models under limited budgets has achieved great success. In particular, recent advances that directly fine-tune quantized weights using Low-Rank Adaptation (LoRA) further reduce training costs. Despite this progress, we point out that existing adaptation recipes are not inference-efficient. Specifically, additional post-training quantization (PTQ) of the tuned weights is needed at deployment, which causes a noticeable performance drop when the bit-width is low. Based on this observation, we introduce IntLoRA, which adapts quantized diffusion models with integer-type low-rank parameters so that inference efficiency is built in during tuning. Specifically, IntLoRA allows the pre-trained weights to remain quantized during training, facilitating fine-tuning on consumer-level GPUs. During inference, the IntLoRA weights can be seamlessly merged into the pre-trained weights to directly obtain quantized downstream weights without PTQ. Extensive experiments show that IntLoRA achieves significant speedups in both training and inference without losing performance.
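
To make the deployment contrast concrete, the sketch below (NumPy, with an illustrative 4-bit uniform symmetric quantizer; not the paper's implementation, and all names such as quantize, W_int, and AB_codes are assumptions) compares the standard dequantize-merge-PTQ pipeline of float LoRA on a quantized model with an integer-domain merge in the spirit of IntLoRA, where the downstream weights come out already quantized.

import numpy as np

def quantize(w, bits=4):
    # Uniform symmetric quantizer: integer codes plus a per-tensor scale.
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    codes = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return codes, scale

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))            # pre-trained weight
A = 0.5 * rng.standard_normal((64, 4))       # low-rank factors (rank 4)
B = 0.5 * rng.standard_normal((4, 64))

W_int, s = quantize(W)                       # frozen quantized pre-trained weight

# (a) Float LoRA on a quantized model: merging forces dequantization, and
#     deployment needs an extra PTQ pass on the merged weights.
W_merged_fp = W_int.astype(np.float64) * s + A @ B
W_down_ptq, s_down = quantize(W_merged_fp)

# (b) Integer-domain merge in the spirit of IntLoRA: keep the adapter as
#     integer codes on the shared scale s, so the merged result is already
#     a quantized downstream weight and no PTQ pass is needed. (IntLoRA
#     learns such integer low-rank parameters directly; the rounding here
#     only illustrates the shared-scale idea.)
AB_codes = np.round((A @ B) / s).astype(np.int16)
W_down_int = np.clip(W_int.astype(np.int16) + AB_codes, -8, 7).astype(np.int8)

# Both paths approximate the same fine-tuned weight W + A @ B.
print(np.abs(W_down_ptq * s_down - (W + A @ B)).max(),
      np.abs(W_down_int * s - (W + A @ B)).max())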

Cite

Text

Guo et al. "IntLoRA: Integral Low-Rank Adaptation of Quantized Diffusion Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Guo et al. "IntLoRA: Integral Low-Rank Adaptation of Quantized Diffusion Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/guo2025icml-intlora/)

BibTeX

@inproceedings{guo2025icml-intlora,
  title     = {{IntLoRA: Integral Low-Rank Adaptation of Quantized Diffusion Models}},
  author    = {Guo, Hang and Li, Yawei and Dai, Tao and Xia, Shu-Tao and Benini, Luca},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {20858--20879},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/guo2025icml-intlora/}
}