LoRA-Pro: Are Low-Rank Adapters Properly Optimized?
Abstract
Low-rank adaptation, also known as LoRA, has emerged as a prominent method for parameter-efficient fine-tuning of foundation models. Despite its computational efficiency, LoRA still yields inferior performance compared to full fine-tuning. In this paper, we first uncover a fundamental connection between the optimization processes of LoRA and full fine-tuning: optimizing with LoRA is mathematically equivalent to full fine-tuning with a low-rank gradient for parameter updates, and this low-rank gradient can be expressed in terms of the gradients of the two low-rank matrices in LoRA. Leveraging this insight, we introduce LoRA-Pro, a method that enhances LoRA's performance by strategically adjusting the gradients of these low-rank matrices. This adjustment allows the low-rank gradient to more accurately approximate the full fine-tuning gradient, thereby narrowing the performance gap between LoRA and full fine-tuning. Furthermore, we theoretically derive the optimal solutions for these gradient adjustments and apply them during fine-tuning in LoRA-Pro. We conduct extensive experiments across natural language understanding, dialogue generation, mathematical reasoning, code generation, and image classification tasks, demonstrating that LoRA-Pro substantially improves LoRA's performance, effectively narrowing the gap with full fine-tuning. Our code is publicly available at https://github.com/mrflogs/LoRA-Pro.
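As a rough sketch of the equivalence stated in the abstract (written in our own notation, not taken verbatim from the paper, and with scaling conventions that may differ from the authors'): let the LoRA parameterization be $W = W_0 + sBA$, where $B \in \mathbb{R}^{m \times r}$ and $A \in \mathbb{R}^{r \times n}$ are the two low-rank matrices and $s$ is a scaling factor. The chain rule gives the gradients of the low-rank matrices, and a first-order update of $B$ and $A$ changes $W$ through an equivalent low-rank gradient $\tilde{g}$:
\[
\frac{\partial \mathcal{L}}{\partial A} = s\,B^{\top}\frac{\partial \mathcal{L}}{\partial W},\qquad
\frac{\partial \mathcal{L}}{\partial B} = s\,\frac{\partial \mathcal{L}}{\partial W}A^{\top},\qquad
\tilde{g} \;=\; s\!\left(\frac{\partial \mathcal{L}}{\partial B}\,A \;+\; B\,\frac{\partial \mathcal{L}}{\partial A}\right).
\]
Under this reading, the gradient adjustment described above amounts to replacing the raw gradients of $B$ and $A$ with adjusted ones chosen so that the induced $\tilde{g}$ lies close to the full fine-tuning gradient $\partial \mathcal{L}/\partial W$, e.g. in Frobenius norm; the paper's exact objective and closed-form solutions are given in the full text.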
Cite
Text
Wang et al. "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" International Conference on Learning Representations, 2025.
Markdown
[Wang et al. "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/wang2025iclr-lorapro/)
BibTeX
@inproceedings{wang2025iclr-lorapro,
title = {{LoRA-Pro: Are Low-Rank Adapters Properly Optimized?}},
author = {Wang, Zhengbo and Liang, Jian and He, Ran and Wang, Zilei and Tan, Tieniu},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/wang2025iclr-lorapro/}
}