Combining Pre-Trained LoRA Modules Improves Few-Shot Adaptation of Foundation Models to New Tasks

Abstract

The efficiency of low-rank adaptation (LoRA) has facilitated the creation and sharing of hundreds of custom LoRA modules for various downstream tasks. In this paper, we explore the composability of LoRA modules, examining whether combining these pre-trained modules enhances the generalization of foundation models to unseen downstream tasks. We evaluate two approaches: (a) uniform composition, which averages upstream LoRA modules with equal weights, and (b) learned composition, where we learn a weight for each upstream module and perform weighted averaging. Our experimental results on both vision and language models reveal that in few-shot settings, where only a limited number of samples are available for the downstream task, both uniform and learned composition yield better transfer accuracy, outperforming full fine-tuning and training a LoRA module from scratch. Our research unveils the potential of composition strategies for enhancing the transferability of foundation models in low-shot settings.
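
The abstract describes the two composition strategies only at a high level; the Python sketch below illustrates what uniform and learned composition of LoRA modules could look like for a single linear layer. Representing each module as an (A, B) factor pair, mixing the full updates delta_W = B @ A rather than the individual factors, and parameterizing the learned weights with a softmax are illustrative assumptions, not the paper's exact implementation.

# Hypothetical sketch of uniform vs. learned LoRA composition for one linear layer.
# Shapes, the mixing point (full updates rather than factors), and the softmax
# parameterization are assumptions made for illustration only.
import torch
import torch.nn as nn


def uniform_composition(loras):
    """Average the low-rank updates of several LoRA modules with equal weights.

    `loras` is a list of (A, B) pairs, where A has shape (r, d_in) and
    B has shape (d_out, r), so each update is delta_W = B @ A.
    """
    updates = [B @ A for A, B in loras]
    return torch.stack(updates).mean(dim=0)


class LearnedComposition(nn.Module):
    """Weighted average of frozen LoRA updates with learnable mixing weights."""

    def __init__(self, loras):
        super().__init__()
        # Upstream modules stay frozen; only the mixing logits are trained
        # on the few-shot downstream data.
        self.updates = nn.Parameter(
            torch.stack([B @ A for A, B in loras]), requires_grad=False
        )
        self.logits = nn.Parameter(torch.zeros(len(loras)))

    def forward(self, x, base_weight):
        # Softmax keeps the mixture a convex combination of upstream modules.
        weights = torch.softmax(self.logits, dim=0)
        delta_w = torch.einsum("k,koi->oi", weights, self.updates)
        return x @ (base_weight + delta_w).T


if __name__ == "__main__":
    d_in, d_out, r, k = 16, 8, 4, 3  # toy sizes: 3 upstream LoRA modules
    loras = [(torch.randn(r, d_in), torch.randn(d_out, r)) for _ in range(k)]
    base_weight = torch.randn(d_out, d_in)

    delta_uniform = uniform_composition(loras)    # (d_out, d_in)
    mixer = LearnedComposition(loras)
    y = mixer(torch.randn(2, d_in), base_weight)  # (2, d_out)
    print(delta_uniform.shape, y.shape)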

Cite

Text

Asadi et al. "Combining Pre-Trained LoRA Modules Improves Few-Shot Adaptation of Foundation Models to New Tasks." ICML 2024 Workshops: FM-Wild, 2024.

Markdown

[Asadi et al. "Combining Pre-Trained LoRA Modules Improves Few-Shot Adaptation of Foundation Models to New Tasks." ICML 2024 Workshops: FM-Wild, 2024.](https://mlanthology.org/icmlw/2024/asadi2024icmlw-combining/)

BibTeX

@inproceedings{asadi2024icmlw-combining,
  title     = {{Combining Pre-Trained LoRA Modules Improves Few-Shot Adaptation of Foundation Models to New Tasks}},
  author    = {Asadi, Nader and Beitollahi, Mahdi and Khalil, Yasser H. and Li, Yinchuan and Zhang, Guojun and Chen, Xi},
  booktitle = {ICML 2024 Workshops: FM-Wild},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/asadi2024icmlw-combining/}
}