Post-LoRA Restoration: Utilizing Transferability of Low-Rank Adapter in Quantized Foundation Models

Abstract

In this study, we consider the transferability of LoRA adapters across quantized foundation models. Specifically, we investigate whether LoRA adapters trained on a low-bit-width foundation model can still perform effectively when merged into a higher-bit-width foundation model. By leveraging this transferability, QLoRA adapters trained under resource-constrained conditions can be used to construct models whose performance is comparable to conventional LoRA. This approach not only improves the performance of trained QLoRA models without any additional training, but also accelerates LoRA fine-tuning.
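
The workflow described in the abstract can be sketched as follows, assuming the Hugging Face transformers, peft, and bitsandbytes libraries; the base model name, adapter path, LoRA hyperparameters, and 4-bit/16-bit settings are illustrative assumptions, not the authors' exact configuration.

    # Illustrative sketch (not the authors' exact setup):
    # 1) fine-tune a LoRA adapter on a 4-bit (QLoRA) base model,
    # 2) load that adapter onto a higher-precision copy of the same base model and merge.

    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model, PeftModel

    base_name = "meta-llama/Llama-2-7b-hf"   # assumed base model

    # --- Step 1: QLoRA-style training on a low-bit-width foundation model ---
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    quantized_base = AutoModelForCausalLM.from_pretrained(
        base_name, quantization_config=bnb_config, device_map="auto"
    )
    lora_config = LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
    )
    model = get_peft_model(quantized_base, lora_config)
    # ... run the usual fine-tuning loop here (e.g. with transformers.Trainer) ...
    model.save_pretrained("qlora-adapter")   # saves adapter weights only

    # --- Step 2: transfer the adapter to a higher-bit-width foundation model ---
    full_precision_base = AutoModelForCausalLM.from_pretrained(
        base_name, torch_dtype=torch.bfloat16, device_map="auto"
    )
    restored = PeftModel.from_pretrained(full_precision_base, "qlora-adapter")
    merged = restored.merge_and_unload()     # merge the LoRA weights into the 16-bit base

Per the abstract, the adapter trained on the quantized base in Step 1 is reused as-is in Step 2, so the higher-precision merged model requires no further training.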

Cite

Text

Kanda and Hatano. "Post-LoRA Restoration: Utilizing Transferability of Low-Rank Adapter in Quantized Foundation Models." ICLR 2025 Workshops: SLLM, 2025.

Markdown

[Kanda and Hatano. "Post-LoRA Restoration: Utilizing Transferability of Low-Rank Adapter in Quantized Foundation Models." ICLR 2025 Workshops: SLLM, 2025.](https://mlanthology.org/iclrw/2025/kanda2025iclrw-postlora/)

BibTeX

@inproceedings{kanda2025iclrw-postlora,
  title     = {{Post-LoRA Restoration: Utilizing Transferability of Low-Rank Adapter in Quantized Foundation Models}},
  author    = {Kanda, Yuto and Hatano, Kenji},
  booktitle = {ICLR 2025 Workshops: SLLM},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/kanda2025iclrw-postlora/}
}