Bayesian Low-Rank Adaptation for Large Language Models

Abstract

Low-rank adaptation (LoRA) has emerged as a new paradigm for cost-efficient fine-tuning of large language models (LLMs). However, fine-tuned LLMs often become overconfident, especially when fine-tuned on small datasets. Bayesian methods, with their inherent ability to estimate uncertainty, serve as potent tools to mitigate overconfidence and enhance calibration. In this work, we introduce Laplace-LoRA, which takes a Bayesian approach to the LoRA parameters. Specifically, Laplace-LoRA applies a Laplace approximation to the posterior over the LoRA parameters, considerably improving the calibration of fine-tuned LLMs.
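
To make the recipe concrete, the sketch below fine-tunes a toy linear classifier with a LoRA-style low-rank update, forms a Laplace approximation around the MAP estimate of the low-rank factors using a diagonal empirical Fisher, and averages predictions over samples from the resulting Gaussian posterior. This is a minimal illustration under assumed simplifications (toy data, diagonal curvature, PyTorch), not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d_in, d_out, rank, n = 16, 4, 2, 64

# Frozen "pretrained" weight plus a LoRA-style low-rank update W + B @ A.
W = 0.1 * torch.randn(d_out, d_in)                    # kept fixed
A = (0.1 * torch.randn(rank, d_in)).requires_grad_()  # trainable LoRA factor
B = torch.zeros(d_out, rank, requires_grad=True)      # trainable LoRA factor

# Toy classification data standing in for a small fine-tuning set.
X = torch.randn(n, d_in)
y = torch.randint(0, d_out, (n,))

# 1) MAP fine-tuning of the LoRA factors only (Gaussian prior = L2 penalty).
prior_prec = 1.0
opt = torch.optim.Adam([A, B], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    logits = X @ (W + B @ A).T
    nll = F.cross_entropy(logits, y, reduction="sum")
    reg = 0.5 * prior_prec * (A.pow(2).sum() + B.pow(2).sum())
    (nll + reg).backward()
    opt.step()

# 2) Laplace approximation: diagonal empirical Fisher of the per-example
#    negative log-likelihood, evaluated at the MAP estimate.
params = [A, B]
fisher = [torch.zeros_like(p) for p in params]
for i in range(n):
    logits_i = X[i:i + 1] @ (W + B @ A).T
    nll_i = F.cross_entropy(logits_i, y[i:i + 1])
    grads = torch.autograd.grad(nll_i, params)
    for f, g in zip(fisher, grads):
        f += g.detach() ** 2

# Diagonal Gaussian posterior over LoRA parameters:
# N(MAP, (Fisher + prior precision)^-1).
post_std = [(1.0 / (f + prior_prec)).sqrt() for f in fisher]

# 3) Bayesian predictive: average softmax outputs over posterior samples.
def predict(x, n_samples=32):
    with torch.no_grad():
        probs = torch.zeros(x.shape[0], d_out)
        for _ in range(n_samples):
            A_s = A + post_std[0] * torch.randn_like(A)
            B_s = B + post_std[1] * torch.randn_like(B)
            probs += (x @ (W + B_s @ A_s).T).softmax(-1)
        return probs / n_samples

print(predict(X[:5]))  # averaged predictive probabilities for the first few inputs
```

The paper's implementation may use richer curvature estimates than a plain diagonal Fisher; the sketch only conveys the overall structure: MAP fine-tuning of the LoRA parameters, a Gaussian posterior around them built from a curvature estimate, and posterior averaging at prediction time.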

Cite

Text

Yang et al. "Bayesian Low-Rank Adaptation for Large Language Models." NeurIPS 2023 Workshops: SoLaR, 2023.

Markdown

[Yang et al. "Bayesian Low-Rank Adaptation for Large Language Models." NeurIPS 2023 Workshops: SoLaR, 2023.](https://mlanthology.org/neuripsw/2023/yang2023neuripsw-bayesian/)

BibTeX

@inproceedings{yang2023neuripsw-bayesian,
  title     = {{Bayesian Low-Rank Adaptation for Large Language Models}},
  author    = {Yang, Adam and Robeyns, Maxime and Wang, Xi and Aitchison, Laurence},
  booktitle = {NeurIPS 2023 Workshops: SoLaR},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/yang2023neuripsw-bayesian/}
}