SuperLoRA: Parameter-Efficient Unified Adaptation for Large Vision Models
Abstract
Low-rank adaptation (LoRA) and its variants are widely employed in fine-tuning large models, including large language models for natural language processing and diffusion models for computer vision. This paper proposes a generalized framework called SuperLoRA that unifies and extends different LoRA variants, each of which can be realized under a specific hyper-parameter setting. By introducing new options for grouping, folding, shuffling, projection, and tensor decomposition, SuperLoRA offers high flexibility and demonstrates superior performance, with up to a 10-fold gain in parameter efficiency on transfer learning tasks.
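For context, the sketch below shows the plain LoRA update that SuperLoRA generalizes: a frozen weight W is adapted as W + alpha * B @ A with trainable low-rank factors A and B. This is a minimal illustration only; the class name LoRALinear and the rank/alpha defaults are assumptions, and SuperLoRA's grouping, folding, shuffling, projection, and tensor-decomposition options are not shown.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Hypothetical wrapper adding a low-rank (LoRA-style) update to a frozen linear layer."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weight
        out_f, in_f = base.weight.shape
        # Only A and B train: rank * (in_f + out_f) parameters instead of in_f * out_f.
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(out_f, rank))        # up-projection, zero-init so delta starts at 0
        self.alpha = alpha

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Equivalent to x @ (W + alpha * B @ A).T + bias.
        return self.base(x) + self.alpha * ((x @ self.A.T) @ self.B.T)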
Cite
Text
Chen et al. "SuperLoRA: Parameter-Efficient Unified Adaptation for Large Vision Models." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024. doi:10.1109/CVPRW63382.2024.00804
Markdown
[Chen et al. "SuperLoRA: Parameter-Efficient Unified Adaptation for Large Vision Models." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024.](https://mlanthology.org/cvprw/2024/chen2024cvprw-superlora/) doi:10.1109/CVPRW63382.2024.00804
BibTeX
@inproceedings{chen2024cvprw-superlora,
title = {{SuperLoRA: Parameter-Efficient Unified Adaptation for Large Vision Models}},
author = {Chen, Xiangyu and Liu, Jing and Wang, Ye and Wang, Pu Perry and Brand, Matthew and Wang, Guanghui and Koike-Akino, Toshiaki},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2024},
  pages = {8050--8055},
doi = {10.1109/CVPRW63382.2024.00804},
url = {https://mlanthology.org/cvprw/2024/chen2024cvprw-superlora/}
}