BLoB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language Models

Abstract

Large Language Models (LLMs) often suffer from overconfidence during inference, particularly when adapted to downstream domain-specific tasks with limited data. Previous work addresses this issue by employing approximate Bayesian estimation after the LLMs are trained, enabling them to quantify uncertainty. However, the performance of such post-training approaches is severely limited by the parameters learned during training. In this paper, we go beyond post-training Bayesianization and propose Bayesian Low-Rank Adaptation by Backpropagation (BLoB), an algorithm that continuously and jointly adjusts both the mean and covariance of LLM parameters throughout the whole fine-tuning process. Our empirical results verify the effectiveness of BLoB in terms of generalization and uncertainty estimation, when evaluated on both in-distribution and out-of-distribution data.
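To make the core idea concrete, below is a minimal illustrative sketch (not the authors' implementation) of a LoRA-style linear layer whose low-rank update is variational: it learns a mean and a diagonal variance for one low-rank factor and samples it with the reparameterization trick, so backpropagation adjusts both moments jointly during fine-tuning, as the abstract describes. The class name VariationalLoRALinear, the choice of which factor is stochastic, and all hyperparameters are assumptions made for illustration only.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class VariationalLoRALinear(nn.Module):
    """Illustrative variational low-rank adapter on top of a frozen linear layer."""

    def __init__(self, base: nn.Linear, rank: int = 8, prior_std: float = 0.1):
        super().__init__()
        self.base = base  # frozen pretrained weight
        for p in self.base.parameters():
            p.requires_grad_(False)
        in_f, out_f = base.in_features, base.out_features
        # Deterministic down-projection A; variational up-projection B with a
        # learned mean and log-variance (diagonal Gaussian posterior).
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B_mean = nn.Parameter(torch.zeros(out_f, rank))
        self.B_logvar = nn.Parameter(torch.full((out_f, rank), -10.0))
        self.prior_std = prior_std

    def forward(self, x):
        # Reparameterization trick: B = mean + std * eps, eps ~ N(0, I), so
        # gradients flow into both the mean and the variance parameters.
        std = torch.exp(0.5 * self.B_logvar)
        B = self.B_mean + std * torch.randn_like(std)
        return self.base(x) + F.linear(F.linear(x, self.A), B)

    def kl(self):
        # KL( N(mean, std^2) || N(0, prior_std^2) ), summed over all entries.
        var = torch.exp(self.B_logvar)
        prior_var = self.prior_std ** 2
        return 0.5 * torch.sum(
            var / prior_var + self.B_mean ** 2 / prior_var - 1.0
            - self.B_logvar + math.log(prior_var)
        )

A training step under this sketch would add a weighted KL term to the task loss, e.g. loss = task_loss + kl_weight * layer.kl(), so a single backward pass updates the mean and variance together; at inference, multiple forward samples can be averaged to estimate predictive uncertainty.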

Cite

Text

Wang et al. "BLoB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language Models." Neural Information Processing Systems, 2024. doi:10.52202/079017-2164

Markdown

[Wang et al. "BLoB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language Models." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/wang2024neurips-blob/) doi:10.52202/079017-2164

BibTeX

@inproceedings{wang2024neurips-blob,
  title     = {{BLoB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language Models}},
  author    = {Wang, Yibin and Shi, Haizhou and Han, Ligong and Metaxas, Dimitris and Wang, Hao},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2164},
  url       = {https://mlanthology.org/neurips/2024/wang2024neurips-blob/}
}