Online-LoRA: Task-Free Online Continual Learning via Low Rank Adaptation
Abstract
Catastrophic forgetting is a significant challenge in online continual learning (OCL), especially for non-stationary data streams that lack well-defined task boundaries. This challenge is exacerbated by the memory constraints and privacy concerns inherent in rehearsal buffers. To tackle catastrophic forgetting, in this paper we introduce Online-LoRA, a novel framework for task-free OCL. Online-LoRA fine-tunes pre-trained Vision Transformer (ViT) models in real time to address the limitations of rehearsal buffers and to leverage the performance benefits of pre-trained models. As its main contribution, our approach features a novel online weight regularization strategy to identify and consolidate important model parameters. Moreover, Online-LoRA leverages the training dynamics of loss values to automatically recognize shifts in the data distribution. Extensive experiments across many task-free OCL scenarios and benchmark datasets (including CIFAR-100, ImageNet-R, ImageNet-S, CUB-200, and CORe50) demonstrate that Online-LoRA can be robustly adapted to various ViT architectures while achieving better performance than SOTA methods.
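The abstract describes two core ideas: training only low-rank (LoRA) adapters on top of a frozen pre-trained ViT, and monitoring the loss trajectory to detect data distribution shifts. The PyTorch sketch below illustrates both ideas in a minimal form; it is not the authors' implementation, and the class names, the loss-plateau heuristic, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update (W + B A)."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # pre-trained weights stay frozen
        # Low-rank factors: A is small random, B starts at zero so the
        # adapter initially leaves the pre-trained output unchanged.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)


class LossShiftDetector:
    """Illustrative heuristic: flag a possible distribution shift when the
    current loss spikes well above its recent running statistics."""

    def __init__(self, window: int = 50, threshold: float = 1.5):
        self.window = window
        self.threshold = threshold
        self.history: list[float] = []

    def update(self, loss_value: float) -> bool:
        is_shift = False
        if len(self.history) >= self.window:
            recent = torch.tensor(self.history[-self.window:])
            is_shift = bool(loss_value > recent.mean() + self.threshold * recent.std())
        self.history.append(loss_value)
        return is_shift
```

In this sketch, only the small `lora_A`/`lora_B` matrices receive gradients during the online stream, and a detector of this kind could be used to decide when to consolidate the current adapter and add regularization on important parameters; the actual consolidation rule in the paper is not reproduced here.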
Cite
Wei et al. "Online-LoRA: Task-Free Online Continual Learning via Low Rank Adaptation." Winter Conference on Applications of Computer Vision, 2025.
BibTeX
@inproceedings{wei2025wacv-onlinelora,
  title     = {{Online-LoRA: Task-Free Online Continual Learning via Low Rank Adaptation}},
  author    = {Wei, Xiwen and Li, Guihong and Marculescu, Radu},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2025},
  pages     = {6634--6645},
  url       = {https://mlanthology.org/wacv/2025/wei2025wacv-onlinelora/}
}