Parameter Efficient Continual Learning with Dynamic Low-Rank Adaptation
Abstract
Catastrophic forgetting remains a critical challenge for deep neural networks in Continual Learning (CL), as it undermines consolidated knowledge when learning new tasks. Parameter-efficient fine-tuning CL techniques are gaining traction for their effectiveness in addressing catastrophic forgetting with a lightweight training schedule while avoiding degradation of consolidated knowledge in pre-trained models. However, low-rank adapters (LoRA) in these approaches are highly sensitive to rank selection, which can lead to sub-optimal resource allocation and performance. To this end, we introduce PEARL, a rehearsal-free CL framework that performs dynamic rank allocation for LoRA components during CL training. Specifically, PEARL leverages reference task weights and adaptively determines the rank of task-specific LoRA components based on the current task's proximity to the reference task weights in parameter space. To demonstrate the versatility of PEARL, we evaluate it across three vision architectures (ResNet, Separable Convolutional Network, and Vision Transformer) and a multitude of CL scenarios, and show that PEARL outperforms all considered baselines by a large margin.
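The abstract only sketches the allocation idea, so the following is a minimal PyTorch sketch of the general mechanism, not the paper's implementation: the function `allocate_lora_rank`, the cosine-proximity heuristic, the rank bounds `r_min`/`r_max`, and the `LoRALinear` wrapper are all illustrative assumptions chosen to show how a per-task rank could be derived from proximity to reference task weights and attached to a frozen pre-trained layer.

```python
import torch


def allocate_lora_rank(task_weights, reference_weights, r_min=2, r_max=32):
    """Map parameter-space proximity between the current task's weights and
    the reference task weights to a LoRA rank in [r_min, r_max].

    Hypothetical heuristic: tasks far from the reference get a larger rank
    (more adapter capacity), tasks close to it a smaller one. The paper's
    actual allocation rule may differ.
    """
    t = torch.cat([w.flatten() for w in task_weights])
    ref = torch.cat([w.flatten() for w in reference_weights])
    proximity = torch.nn.functional.cosine_similarity(t, ref, dim=0)  # in [-1, 1]
    scale = (1.0 - proximity.clamp(-1.0, 1.0)) / 2.0                  # in [0, 1]
    return int(round(r_min + scale.item() * (r_max - r_min)))


class LoRALinear(torch.nn.Module):
    """A frozen pre-trained linear layer plus a task-specific low-rank update."""

    def __init__(self, base: torch.nn.Linear, rank: int, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # preserve consolidated knowledge
        self.A = torch.nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # y = W x + (alpha / r) * B A x
        return self.base(x) + (x @ self.A.t() @ self.B.t()) * self.scaling
```

In this reading, each new task trains its own (A, B) pair whose rank is chosen per task rather than fixed globally, which is the sensitivity to rank selection the abstract targets.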
Cite
Text
Bhat et al. "Parameter Efficient Continual Learning with Dynamic Low-Rank Adaptation." Transactions on Machine Learning Research, 2026.
Markdown
[Bhat et al. "Parameter Efficient Continual Learning with Dynamic Low-Rank Adaptation." Transactions on Machine Learning Research, 2026.](https://mlanthology.org/tmlr/2026/bhat2026tmlr-parameter/)
BibTeX
@article{bhat2026tmlr-parameter,
title = {{Parameter Efficient Continual Learning with Dynamic Low-Rank Adaptation}},
author = {Bhat, Prashant Shivaram and Yazdani, Shakib and Arani, Elahe and Zonooz, Bahram},
journal = {Transactions on Machine Learning Research},
year = {2026},
url = {https://mlanthology.org/tmlr/2026/bhat2026tmlr-parameter/}
}