Computational Limits of Low-Rank Adaptation (LoRA) Fine-Tuning for Transformer Models
Abstract
We study the computational limits of Low-Rank Adaptation (LoRA) for fine-tuning transformer-based models using fine-grained complexity theory. Our key observation is that the existence of low-rank decompositions within the gradient computation of LoRA adaptation leads to possible algorithmic speedup. This allows us to (i) identify a phase transition behavior of efficiency assuming the Strong Exponential Time Hypothesis (SETH), and (ii) prove the existence of almost linear algorithms by controlling the LoRA update computation term by term. For the former, we identify a sharp transition in the efficiency of all possible rank-$r$ LoRA update algorithms for transformers, based on specific norms resulting from the multiplications of the input sequence $X$, pretrained weights $W^\star$, and adapter matrices $\alpha B A/r$. Specifically, we derive a shared upper bound threshold for such norms and show that efficient (sub-quadratic) approximation algorithms for LoRA exist only below this threshold. For the latter, we prove the existence of almost linear approximation algorithms for LoRA adaptation by utilizing the hierarchical low-rank structures of LoRA gradients and approximating the gradients with a series of chained low-rank approximations. To showcase our theory, we consider two practical scenarios: partial adaptation (e.g., only $W_V$ and $W_Q$) and full adaptation (e.g., $W_Q$, $W_V$, and $W_K$) of the weights in attention heads.
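The following is a minimal sketch (not the authors' code) of the LoRA update form referenced in the abstract, $W = W^\star + \frac{\alpha}{r} BA$, applied to a single attention head with partial adaptation (only $W_Q$ and $W_V$ carry adapters). The function names, toy dimensions, and default $\alpha$, $r$ values are illustrative assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def lora_weight(W_star, B, A, alpha, r):
    """Effective weight after a rank-r LoRA update: W* + (alpha / r) * B @ A."""
    return W_star + (alpha / r) * (B @ A)

def lora_attention(X, W, adapters, alpha=16.0, r=4):
    """Single-head attention where a subset of {W_Q, W_K, W_V} carries a LoRA adapter.

    X:        (n, d) input sequence
    W:        pretrained weights {"Q": W_Q, "K": W_K, "V": W_V}, each (d, d)
    adapters: maps a subset of {"Q", "K", "V"} to (B, A) with B: (d, r), A: (r, d);
              keys absent from `adapters` stay frozen (partial adaptation).
    """
    d = X.shape[1]
    eff = {}
    for name in ("Q", "K", "V"):
        if name in adapters:
            B, A = adapters[name]
            eff[name] = lora_weight(W[name], B, A, alpha, r)
        else:
            eff[name] = W[name]
    Q, K, V = X @ eff["Q"], X @ eff["K"], X @ eff["V"]
    return softmax(Q @ K.T / np.sqrt(d)) @ V

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, r = 8, 16, 4
    X = rng.standard_normal((n, d))
    W = {k: rng.standard_normal((d, d)) / np.sqrt(d) for k in ("Q", "K", "V")}
    # Partial adaptation: only W_Q and W_V receive rank-r adapters.
    adapters = {k: (rng.standard_normal((d, r)) * 0.01,
                    rng.standard_normal((r, d)) * 0.01) for k in ("Q", "V")}
    out = lora_attention(X, W, adapters, alpha=16.0, r=r)
    print(out.shape)  # (8, 16)
```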
Cite
Text
Hu et al. "Computational Limits of Low-Rank Adaptation (LoRA) Fine-Tuning for Transformer Models." International Conference on Learning Representations, 2025.
Markdown
[Hu et al. "Computational Limits of Low-Rank Adaptation (LoRA) Fine-Tuning for Transformer Models." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/hu2025iclr-computational/)
BibTeX
@inproceedings{hu2025iclr-computational,
  title = {{Computational Limits of Low-Rank Adaptation (LoRA) Fine-Tuning for Transformer Models}},
  author = {Hu, Jerry Yao-Chieh and Su, Maojiang and Kuo, En-jui and Song, Zhao and Liu, Han},
  booktitle = {International Conference on Learning Representations},
  year = {2025},
  url = {https://mlanthology.org/iclr/2025/hu2025iclr-computational/}
}