Batched Low-Rank Adaptation of Foundation Models
Abstract
Low-Rank Adaptation (LoRA) has recently gained attention for fine-tuning foundation models by incorporating trainable low-rank matrices, thereby reducing the number of trainable parameters. While LoRA offers numerous advantages, its applicability for real-time serving to a diverse and global user base is constrained by its inability to handle multiple task-specific adapters efficiently. This imposes a performance bottleneck in scenarios requiring personalized, task-specific adaptations for each incoming request. To address this, we introduce FLoRA (Fast LoRA), a framework in which each input example in a minibatch can be associated with its unique low-rank adaptation weights, allowing for efficient batching of heterogeneous requests. We empirically demonstrate that FLoRA retains the performance merits of LoRA, showcasing competitive results on the MultiPL-E code generation benchmark spanning over 8 languages and a multilingual speech recognition task across 6 languages.
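To make the batching problem concrete, the sketch below shows a single batched forward pass in which every example in the minibatch carries its own low-rank adapter pair, as the abstract describes. This is only an illustrative sketch of the setting, not the paper's fused formulation; the function name, shapes, and the use of batched matrix multiplication are assumptions for illustration.

```python
import torch

def batched_lora_forward(x, W, A, B):
    """Illustrative sketch: apply a distinct low-rank adapter to each example.

    x: (batch, d_in)        -- one heterogeneous request per row
    W: (d_out, d_in)        -- shared frozen base weight
    A: (batch, rank, d_in)  -- per-example down-projection
    B: (batch, d_out, rank) -- per-example up-projection
    returns: (batch, d_out)
    """
    base = x @ W.T                            # shared base projection for the whole batch
    low = torch.bmm(A, x.unsqueeze(-1))       # per-example down-projection: (batch, rank, 1)
    delta = torch.bmm(B, low).squeeze(-1)     # per-example up-projection: (batch, d_out)
    return base + delta

# Toy usage with random per-request adapters (hypothetical sizes)
batch, d_in, d_out, rank = 4, 16, 8, 2
x = torch.randn(batch, d_in)
W = torch.randn(d_out, d_in)
A = torch.randn(batch, rank, d_in)
B = torch.randn(batch, d_out, rank)
y = batched_lora_forward(x, W, A, B)
print(y.shape)  # torch.Size([4, 8])
```

The point of the sketch is that requests with different adapters can share one forward pass over the frozen weight W, rather than merging a single adapter into W and serving one task at a time.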
Cite
Text
Wen and Chaudhuri. "Batched Low-Rank Adaptation of Foundation Models." International Conference on Learning Representations, 2024.

Markdown
[Wen and Chaudhuri. "Batched Low-Rank Adaptation of Foundation Models." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/wen2024iclr-batched/)

BibTeX
@inproceedings{wen2024iclr-batched,
title = {{Batched Low-Rank Adaptation of Foundation Models}},
author = {Wen, Yeming and Chaudhuri, Swarat},
booktitle = {International Conference on Learning Representations},
year = {2024},
url = {https://mlanthology.org/iclr/2024/wen2024iclr-batched/}
}