BiDoRA: Bi-Level Optimization-Based Weight-Decomposed Low-Rank Adaptation

Abstract

Parameter-efficient fine-tuning (PEFT) is a flexible and efficient method for adapting large language models (LLMs) to downstream tasks. Among these methods, weight-decomposed low-rank adaptation (DoRA) is a promising approach that decomposes weight matrices into magnitude and direction components to better mimic full fine-tuning (FT). However, DoRA's simultaneous optimization of these components makes it over-expressive, increases the risk of overfitting, and creates a coupled updating pattern that limits its learning capacity. To address these issues, we propose Bi-level Optimization-Based Weight-Decomposed Low-Rank Adaptation (BiDoRA), a novel PEFT method based on a bi-level optimization framework. BiDoRA fundamentally differs from DoRA by optimizing the magnitude and direction in two separate, asynchronous loops using distinct training and validation data splits. This decoupled optimization process effectively mitigates overfitting and allows for more flexible updates that align more closely with FT. For instance, weight decomposition analysis shows that BiDoRA achieves a magnitude-direction update correlation of $-8.042$, significantly closer to the FT ideal than DoRA's $-1.784$. Evaluation of BiDoRA on diverse tasks spanning natural language understanding, generation, token classification, and extremely small biomedical datasets reveals that it consistently outperforms DoRA and a wide range of leading PEFT methods. The improvement is statistically significant: on the GLUE benchmark, BiDoRA surpasses DoRA with a Wilcoxon signed-rank test p-value of $2.4\times10^{-4}$. The code for BiDoRA is available at https://github.com/t2ance/BiDoRA.
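The decomposition the abstract builds on can be sketched concretely. In DoRA-style reparameterization, the adapted weight is a per-column magnitude vector applied to the direction of the pretrained weight plus a low-rank update; BiDoRA's bi-level scheme then updates the low-rank direction factors on the training split and the magnitude on the validation split in alternating loops. The sketch below shows only the decomposition itself; the names (`m`, `A`, `B`, `compose`) and the shapes are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 4, 6, 2          # toy dimensions; rank r << min(d_out, d_in)

W0 = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
B = np.zeros((d_out, r))                  # low-rank factors, LoRA-style init
A = rng.standard_normal((r, d_in)) * 0.01 # (B @ A = 0 at initialization)
m = np.linalg.norm(W0, axis=0)            # trainable per-column magnitude

def compose(m, W0, B, A):
    """Recompose the weight: magnitude times column-normalized direction."""
    V = W0 + B @ A                         # direction component
    return m * (V / np.linalg.norm(V, axis=0))

# At initialization B @ A = 0, so the composed weight recovers W0 exactly.
# In BiDoRA's bi-level loop (not shown), A and B would be updated on the
# training split and m on the validation split, in separate steps.
W = compose(m, W0, B, A)
```

Because the column norms of the direction term are divided out, `m` alone controls each column's scale, which is what lets the two components be optimized in decoupled loops.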

Cite

Text

Qin et al. "BiDoRA: Bi-Level Optimization-Based Weight-Decomposed Low-Rank Adaptation." Transactions on Machine Learning Research, 2025.

Markdown

[Qin et al. "BiDoRA: Bi-Level Optimization-Based Weight-Decomposed Low-Rank Adaptation." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/qin2025tmlr-bidora/)

BibTeX

@article{qin2025tmlr-bidora,
  title     = {{BiDoRA: Bi-Level Optimization-Based Weight-Decomposed Low-Rank Adaptation}},
  author    = {Qin, Peijia and Zhang, Ruiyi and Xie, Pengtao},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/qin2025tmlr-bidora/}
}