Batched Low-Rank Adaptation of Foundation Models

Abstract

Low-Rank Adaptation (LoRA) has recently gained attention for fine-tuning foundation models by incorporating trainable low-rank matrices, thereby reducing the number of trainable parameters. While LoRA offers numerous advantages, its applicability to real-time serving for a diverse, global user base is constrained by its inability to handle multiple task-specific adapters efficiently. This imposes a performance bottleneck in scenarios that require a personalized, task-specific adaptation for each incoming request. To address this, we introduce FLORA (Fast LoRA), a framework in which each input example in a minibatch can be associated with its own low-rank adaptation weights, allowing heterogeneous requests to be batched efficiently. We empirically demonstrate that FLORA retains the performance merits of LoRA, showcasing competitive results on the MultiPL-E code generation benchmark spanning over 6 languages.
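
For intuition, here is a minimal PyTorch-style sketch of the general idea of per-example low-rank adapters applied inside a single batched forward pass. It is not the paper's exact vectorization; the function name `batched_lora_forward`, the tensor shapes, and the rank `r` are illustrative assumptions.

```python
import torch

def batched_lora_forward(x, W, A, B):
    """Sketch: apply a different low-rank adapter to each example in a batch.

    x: (batch, seq_len, d_in)  -- input activations
    W: (d_in, d_out)           -- shared, frozen base weight
    A: (batch, d_in, r)        -- per-example down-projection of rank r
    B: (batch, r, d_out)       -- per-example up-projection of rank r
    """
    base = x @ W                               # shared base path: (batch, seq_len, d_out)
    low_rank = torch.bmm(torch.bmm(x, A), B)   # per-example adapter path via batched matmul
    return base + low_rank

# Hypothetical usage: four requests, each carrying its own adapter.
batch, seq_len, d_in, d_out, r = 4, 16, 512, 512, 8
x = torch.randn(batch, seq_len, d_in)
W = torch.randn(d_in, d_out)
A = torch.randn(batch, d_in, r)
B = torch.zeros(batch, r, d_out)   # zero-init up-projection, as in standard LoRA
y = batched_lora_forward(x, W, A, B)
```

Contrast this with merging a single adapter into the base weight (`W + B @ A`), which forces every example in the batch to share the same adaptation and therefore prevents batching heterogeneous requests.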

Cite

Text

Wen and Chaudhuri. "Batched Low-Rank Adaptation of Foundation Models." NeurIPS 2023 Workshops: WANT, 2023.

Markdown

[Wen and Chaudhuri. "Batched Low-Rank Adaptation of Foundation Models." NeurIPS 2023 Workshops: WANT, 2023.](https://mlanthology.org/neuripsw/2023/wen2023neuripsw-batched/)

BibTeX

@inproceedings{wen2023neuripsw-batched,
  title     = {{Batched Low-Rank Adaptation of Foundation Models}},
  author    = {Wen, Yeming and Chaudhuri, Swarat},
  booktitle = {NeurIPS 2023 Workshops: WANT},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/wen2023neuripsw-batched/}
}