PLAN: Proactive Low-Rank Allocation for Continual Learning

Abstract

Continual learning (CL) requires models to continuously adapt to new tasks without forgetting past knowledge. In this work, we propose Proactive Low-rank AllocatioN (PLAN), a framework that extends Low-Rank Adaptation (LoRA) to enable efficient and interference-aware fine-tuning of large pre-trained models in CL settings. PLAN proactively manages the allocation of task-specific subspaces by introducing orthogonal basis vectors for each task and optimizing them through a perturbation-based strategy that minimizes conflicts with previously learned parameters. Furthermore, PLAN incorporates a novel selection mechanism that identifies and assigns basis vectors with minimal sensitivity to interference, reducing the risk of degrading past knowledge while maintaining efficient adaptation to new tasks. Empirical results on standard CL benchmarks demonstrate that PLAN consistently outperforms existing methods, establishing a new state-of-the-art for continual learning with foundation models.
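
The abstract describes allocating each task its own LoRA-style low-rank subspace, with basis vectors kept orthogonal across tasks to limit interference. The sketch below is a minimal, hypothetical PyTorch illustration of that allocation idea only; it does not implement PLAN's perturbation-based optimization or its interference-sensitivity selection, and all class names, shapes, and hyperparameters are illustrative assumptions rather than the authors' code.

```python
# Minimal sketch (not the authors' implementation): per-task low-rank updates
# W + sum_t B_t A_t on a frozen weight, where each new task's basis B_t is
# chosen orthogonal to the bases already allocated to earlier tasks.
import torch


def orthogonal_task_basis(prev_basis, dim, rank):
    """Sample `rank` directions in R^dim orthogonal to previously allocated ones."""
    candidates = torch.randn(dim, rank)
    if prev_basis is not None:
        # Project out the span of earlier tasks' basis vectors.
        candidates = candidates - prev_basis @ (prev_basis.T @ candidates)
    # Orthonormalize the remaining directions.
    q, _ = torch.linalg.qr(candidates)
    return q[:, :rank]


class TaskLoRALinear(torch.nn.Module):
    """Frozen pre-trained linear layer plus per-task low-rank adapters."""

    def __init__(self, weight, rank=4):
        super().__init__()
        self.weight = torch.nn.Parameter(weight, requires_grad=False)  # frozen backbone
        self.rank = rank
        self.A = torch.nn.ParameterList()  # trainable per-task factors
        self.B = torch.nn.ParameterList()  # per-task bases, orthogonal across tasks

    def add_task(self):
        out_dim, in_dim = self.weight.shape
        prev = torch.cat([b.detach() for b in self.B], dim=1) if len(self.B) else None
        self.B.append(torch.nn.Parameter(orthogonal_task_basis(prev, out_dim, self.rank)))
        self.A.append(torch.nn.Parameter(torch.zeros(self.rank, in_dim)))

    def forward(self, x):
        w = self.weight
        for a, b in zip(self.A, self.B):
            w = w + b @ a  # accumulate task-specific low-rank updates
        return x @ w.T


layer = TaskLoRALinear(torch.randn(64, 32))
layer.add_task()                        # allocate a subspace for a new task
print(layer(torch.randn(8, 32)).shape)  # torch.Size([8, 64])
```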

Cite

Text

Wang et al. "PLAN: Proactive Low-Rank Allocation for Continual Learning." International Conference on Computer Vision, 2025.

Markdown

[Wang et al. "PLAN: Proactive Low-Rank Allocation for Continual Learning." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/wang2025iccv-plan/)

BibTeX

@inproceedings{wang2025iccv-plan,
  title     = {{PLAN: Proactive Low-Rank Allocation for Continual Learning}},
  author    = {Wang, Xiequn and Zhuang, Zhan and Zhang, Yu},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {2909--2918},
  url       = {https://mlanthology.org/iccv/2025/wang2025iccv-plan/}
}