Zero-Shot Adaptation of Parameter-Efficient Fine-Tuning in Diffusion Models
Abstract
We introduce ProLoRA, which enables zero-shot adaptation of parameter-efficient fine-tuning in text-to-image diffusion models. ProLoRA transfers pre-trained low-rank adjustments (e.g., LoRA) from a source to a target model without any additional training data. This overcomes a limitation of traditional methods, which require retraining whenever the base model changes, a process that is often infeasible due to data constraints. ProLoRA achieves this by projecting the source adjustments into the target model's weight space, leveraging subspace and null-space similarities and selectively targeting aligned layers. Evaluations on established text-to-image models demonstrate successful knowledge transfer and comparable performance without retraining.
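The projection step described above can be sketched in a few lines of NumPy. This is a hypothetical illustration only, not the authors' implementation: the function names (subspace_overlap, project_lora), the choice of top-k singular subspaces of the target weights, and the layer-selection threshold are assumptions introduced here for exposition.

import numpy as np

def subspace_overlap(W_src, W_tgt, k):
    # Mean squared cosine of the principal angles between the top-k left
    # singular subspaces of the two layers; a crude proxy (assumption) for
    # the layer alignment the abstract mentions. Returns a value in [0, 1].
    U_s = np.linalg.svd(W_src, full_matrices=False)[0][:, :k]
    U_t = np.linalg.svd(W_tgt, full_matrices=False)[0][:, :k]
    return np.linalg.norm(U_s.T @ U_t) ** 2 / k

def project_lora(W_tgt, B, A, k):
    # Map a LoRA update dW = B @ A, trained on a source layer, onto the
    # target layer by projecting its factors onto the target's top-k
    # left/right singular subspaces (an assumed form of the projection).
    U_t, _, Vh_t = np.linalg.svd(W_tgt, full_matrices=False)
    U_k, V_k = U_t[:, :k], Vh_t[:k, :].T
    B_proj = U_k @ (U_k.T @ B)   # align the output (column) space
    A_proj = (A @ V_k) @ V_k.T   # align the input (row) space
    return B_proj, A_proj

# Toy usage: transfer a rank-4 LoRA between two random 64x64 layers,
# skipping layers whose subspaces are poorly aligned.
rng = np.random.default_rng(0)
W_src, W_tgt = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))
B, A = rng.normal(size=(64, 4)), rng.normal(size=(4, 64))
if subspace_overlap(W_src, W_tgt, k=32) > 0.3:  # threshold is illustrative
    B_new, A_new = project_lora(W_tgt, B, A, k=32)
    # B_new @ A_new is the adjustment applied to W_tgt at inference time.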
Cite
Text
Farhadzadeh et al. "Zero-Shot Adaptation of Parameter-Efficient Fine-Tuning in Diffusion Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Farhadzadeh et al. "Zero-Shot Adaptation of Parameter-Efficient Fine-Tuning in Diffusion Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/farhadzadeh2025icml-zeroshot/)
BibTeX
@inproceedings{farhadzadeh2025icml-zeroshot,
title = {{Zero-Shot Adaptation of Parameter-Efficient Fine-Tuning in Diffusion Models}},
author = {Farhadzadeh, Farzad and Das, Debasmit and Borse, Shubhankar and Porikli, Fatih},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {16144--16160},
volume = {267},
url = {https://mlanthology.org/icml/2025/farhadzadeh2025icml-zeroshot/}
}