Variational Low-Rank Adaptation Using IVON

Abstract

We show that variational learning can significantly improve the accuracy and calibration of Low-Rank Adaptation (LoRA) without substantially increasing the cost. We replace AdamW with the Improved Variational Online Newton (IVON) algorithm to finetune large language models. For Llama-2 with 7 billion parameters, IVON improves accuracy over AdamW by 2.8% and expected calibration error by 4.6%. The accuracy is also better than other Bayesian alternatives, yet the cost is lower and the implementation is easier. Our work provides additional evidence for the effectiveness of IVON for large language models. The code is available at https://github.com/team-approx-bayes/ivon-lora.
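
The swap described in the abstract amounts to replacing the AdamW optimizer with IVON in an otherwise standard LoRA finetuning loop, drawing one or more weight samples from the variational posterior for each gradient step. The sketch below illustrates this idea in PyTorch. It assumes the publicly released IVON optimizer package (installed as ivon-opt and imported as `ivon`) with its `IVON` constructor and `sampled_params` context manager; the hyperparameter values and the helper `finetune_lora_with_ivon` are illustrative assumptions, not the paper's reference implementation (see the linked repository for that).

import torch
import ivon  # assumed: the released IVON optimizer package (pip install ivon-opt)

def finetune_lora_with_ivon(model, data_loader, num_data, mc_samples=1):
    # Only the LoRA adapter weights should be trainable; the base model stays frozen.
    trainable = [p for p in model.parameters() if p.requires_grad]
    # ess (effective sample size) is typically set to the training-set size.
    optimizer = ivon.IVON(trainable, lr=1e-2, ess=num_data)

    model.train()
    for batch in data_loader:
        # Estimate the gradient from mc_samples draws of the weight posterior.
        for _ in range(mc_samples):
            with optimizer.sampled_params(train=True):
                optimizer.zero_grad()
                loss = model(**batch).loss
                loss.backward()
        optimizer.step()
    return model

At test time one can either predict with the posterior mean weights or average predictions over several posterior samples (again via `sampled_params`); the latter is where calibration gains are typically expected.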

Cite

Text

Cong et al. "Variational Low-Rank Adaptation Using IVON." NeurIPS 2024 Workshops: FITML, 2024.

Markdown

[Cong et al. "Variational Low-Rank Adaptation Using IVON." NeurIPS 2024 Workshops: FITML, 2024.](https://mlanthology.org/neuripsw/2024/cong2024neuripsw-variational/)

BibTeX

@inproceedings{cong2024neuripsw-variational,
  title     = {{Variational Low-Rank Adaptation Using IVON}},
  author    = {Cong, Bai and Daheim, Nico and Shen, Yuesong and Cremers, Daniel and Yokota, Rio and Khan, Mohammad Emtiyaz and Möllenhoff, Thomas},
  booktitle = {NeurIPS 2024 Workshops: FITML},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/cong2024neuripsw-variational/}
}