RidgeLoRA: Matrix Ridge Enhanced Low-Rank Adaptation of Large Language Models

Abstract

As one of the state-of-the-art parameter-efficient fine-tuning~(PEFT) methods, Low-Rank Adaptation (LoRA) enables model optimization at reduced computational cost through trainable low-rank matrices. However, its low-rank nature tends to reduce representational capacity, leading to suboptimal performance. To overcome this limitation, we propose RidgeLoRA, a lightweight LoRA-style architecture that combines a novel design with a matrix ridge enhanced full-rank approximation to match the performance of full-rank training, while eliminating the high memory cost and large number of parameters otherwise required to restore matrix rank. We provide a rigorous mathematical derivation proving that RidgeLoRA achieves a better upper bound on representation ability than vanilla LoRA. Furthermore, extensive experiments across multiple domains demonstrate that RidgeLoRA outperforms other LoRA variants and can even match or surpass full-rank training.
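
For context, the following is a minimal PyTorch sketch of the vanilla LoRA update that RidgeLoRA builds on: a frozen weight is augmented with a trainable low-rank product. The layer sizes, rank, and scaling values are illustrative, and RidgeLoRA's matrix ridge enhancement is not reproduced here, since the abstract does not specify its exact form.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Vanilla LoRA: y = x W^T + (alpha / r) * x A^T B^T, with W frozen.
    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Frozen pretrained weight (random stand-in for a real checkpoint).
        self.weight = nn.Parameter(torch.randn(out_features, in_features), requires_grad=False)
        # Trainable low-rank factors: A projects to rank-r space, B projects back.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = x @ self.weight.T
        update = (x @ self.lora_A.T) @ self.lora_B.T
        return base + self.scaling * update

# Usage: only the low-rank factors receive gradients during fine-tuning.
layer = LoRALinear(768, 768, rank=8)
y = layer(torch.randn(4, 768))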

Cite

Text

Zhu et al. "RidgeLoRA: Matrix Ridge Enhanced Low-Rank Adaptation of Large Language Models." Advances in Neural Information Processing Systems, 2025.

Markdown

[Zhu et al. "RidgeLoRA: Matrix Ridge Enhanced Low-Rank Adaptation of Large Language Models." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/zhu2025neurips-ridgelora/)

BibTeX

@inproceedings{zhu2025neurips-ridgelora,
  title     = {{RidgeLoRA: Matrix Ridge Enhanced Low-Rank Adaptation of Large Language Models}},
  author    = {Zhu, Junda and Ai, Jun and Li, Yujun and Yin, Yichun and Wang, Yasheng and Shang, Lifeng and Liu, Qun},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/zhu2025neurips-ridgelora/}
}