RoCoFT: Efficient Finetuning of Large Language Models with Row-Column Updates
Abstract
We propose RoCoFT, a parameter-efficient fine-tuning method for large language models based on updating only a few rows and columns of the weight matrices in transformers. Through extensive experiments with medium-sized LMs such as BERT and RoBERTa, and larger LMs such as Bloom-7B, Llama2-7B, and Llama2-13B, we show that our method achieves accuracies comparable to or better than state-of-the-art PEFT methods while also being more memory- and compute-efficient. We also study the reasons for the effectiveness of our method using tools from neural tangent kernel theory. We empirically demonstrate that our kernel, constructed using a restricted set of row and column parameters, is numerically close to the full-parameter kernel and gives comparable classification performance. Ablation studies are conducted to investigate the impact of different algorithmic choices, including the strategy for selecting rows and columns as well as the optimal rank for an effective implementation of our method.
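The core idea, as described in the abstract, is to make only a few rows (or columns) of each weight matrix trainable while freezing the rest. The snippet below is a minimal sketch of that restriction, not the authors' implementation: the helper name `restrict_to_rows`, the choice of a single linear layer, the rank value, and the gradient-hook masking are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' code): allow updates only to the
# first r rows of a linear layer's weight matrix, keeping all other entries fixed.
import torch
import torch.nn as nn

def restrict_to_rows(linear: nn.Linear, r: int) -> None:
    """Mask gradients so only the first r rows of linear.weight are updated."""
    mask = torch.zeros_like(linear.weight)
    mask[:r, :] = 1.0  # rows 0..r-1 remain trainable
    linear.weight.register_hook(lambda grad: grad * mask)
    if linear.bias is not None:
        linear.bias.requires_grad_(False)  # bias kept frozen in this sketch

# Usage: apply to a projection matrix and fine-tune as usual.
layer = nn.Linear(768, 768)
restrict_to_rows(layer, r=4)
loss = layer(torch.randn(2, 768)).sum()
loss.backward()
print(layer.weight.grad[4:].abs().sum())  # tensor(0.) -- frozen rows get no update
```

A column-wise variant would simply set `mask[:, :r] = 1.0` instead; in either case the number of trainable parameters grows linearly with the chosen rank r, which is what makes the update memory- and compute-efficient.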
Cite
Text
Kowsher et al. "RoCoFT: Efficient Finetuning of Large Language Models with Row-Column Updates." NeurIPS 2024 Workshops: FITML, 2024.

Markdown
[Kowsher et al. "RoCoFT: Efficient Finetuning of Large Language Models with Row-Column Updates." NeurIPS 2024 Workshops: FITML, 2024.](https://mlanthology.org/neuripsw/2024/kowsher2024neuripsw-rocoft/)

BibTeX
@inproceedings{kowsher2024neuripsw-rocoft,
  title     = {{RoCoFT: Efficient Finetuning of Large Language Models with Row-Column Updates}},
  author    = {Kowsher, Md and Esmaeilbeig, Tara and Yu, Chun-Nam and Soltanalian, Mojtaba and Yousefi, Niloofar},
  booktitle = {NeurIPS 2024 Workshops: FITML},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/kowsher2024neuripsw-rocoft/}
}