Robust and Efficient Fine-Tuning of LLMs with Bayesian Reparameterization of Low-Rank Adaptation

Abstract

Large Language Models (LLMs) are highly resource-intensive to fine-tune due to their enormous size. While low-rank adaptation is a prominent parameter-efficient fine-tuning approach, it suffers from sensitivity to hyperparameter choices, leading to instability in model performance on downstream fine-tuning tasks. This paper highlights the importance of effective parameterization in low-rank fine-tuning for reducing estimator variance and enhancing the stability of final model outputs. We propose MonteCLoRA, an efficient fine-tuning technique that employs Monte Carlo estimation to learn an unbiased posterior estimate of the low-rank parameters with low expected variance, stabilizing fine-tuned LLMs with only $\mathcal{O}(r)$ additional parameters for a given rank $r$. MonteCLoRA yields significant improvements in accuracy and robustness, achieving up to $3.8\%$ higher accuracy and $8.6\%$ greater robustness than existing efficient fine-tuning methods on natural language understanding tasks with pre-trained RoBERTa-base. Furthermore, on generative tasks with pre-trained LLaMA-1-7B and LLaMA-3.2-3B-Instruct, MonteCLoRA demonstrates robust performance with $50\%$ and $62\%$ lower spreads, respectively, than contemporary efficient fine-tuning methods. The theoretical and empirical results presented in the paper underscore how parameterization and hyperpriors balance exploration and exploitation in the low-rank parametric space, thereby leading to more robust and better-optimized parameter estimation during efficient fine-tuning.
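
To make the core idea concrete, below is a minimal, hypothetical sketch of a LoRA layer whose low-rank update is stochastically reparameterized and Monte Carlo averaged at training time. All names (`MonteCarloLoRALinear`, `n_samples`, `log_concentration`) and the choice of a Dirichlet reweighting over the rank dimension are illustrative assumptions for exposition, not the authors' released implementation; the sketch only shows how a rank-sized ($\mathcal{O}(r)$) set of extra parameters can induce a distribution over low-rank updates whose sample average is used as the estimate.

```python
# Hypothetical sketch of Monte Carlo reparameterized LoRA (not the
# paper's official code). Assumes PyTorch.
import torch
import torch.nn as nn


class MonteCarloLoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8,
                 n_samples=8, alpha=16.0):
        super().__init__()
        # Frozen pre-trained weight, as in standard LoRA.
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)
        # Standard LoRA factors: W x + (alpha / rank) * B A x.
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank
        # O(r) extra parameters: a learnable Dirichlet concentration
        # over per-rank mixture weights (an assumption for this sketch).
        self.log_concentration = nn.Parameter(torch.zeros(rank))
        self.n_samples = n_samples

    def forward(self, x):
        base_out = self.base(x)
        conc = self.log_concentration.exp() + 1e-4
        # Dirichlet over the rank dimension; rsample gives a
        # reparameterized (differentiable) sample.
        dist = torch.distributions.Dirichlet(conc)
        update = 0.0
        for _ in range(self.n_samples):
            # Scale so the reweighting has mean ~1 per rank component.
            w = dist.rsample() * conc.numel()
            # Low-rank update with the sampled per-rank reweighting.
            update = update + (x @ (self.A * w.unsqueeze(1)).T) @ self.B.T
        # Monte Carlo average over samples reduces estimator variance.
        return base_out + self.scaling * update / self.n_samples
```

A quick usage check under the same assumptions: `MonteCarloLoRALinear(768, 768)(torch.randn(2, 10, 768))` returns a `(2, 10, 768)` tensor. Averaging over `n_samples` draws is what keeps the update an unbiased, lower-variance estimate of the mean low-rank adaptation, mirroring the abstract's claim at the level of a toy example.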

Cite

Text

Seth et al. "Robust and Efficient Fine-Tuning of LLMs with Bayesian Reparameterization of Low-Rank Adaptation." Transactions on Machine Learning Research, 2025.

Markdown

[Seth et al. "Robust and Efficient Fine-Tuning of LLMs with Bayesian Reparameterization of Low-Rank Adaptation." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/seth2025tmlr-robust/)

BibTeX

@article{seth2025tmlr-robust,
  title     = {{Robust and Efficient Fine-Tuning of LLMs with Bayesian Reparameterization of Low-Rank Adaptation}},
  author    = {Seth, Vaibhav and Sengupta, Ayan and Pathak, Arinjay and Verma, Aastha A K and Raman, Natraj and Gopalakrishnan, Sriram and Chatterjee, Niladri and Chakraborty, Tanmoy},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/seth2025tmlr-robust/}
}