Online-LoRA: Task-Free Online Continual Learning via Low Rank Adaptation
Abstract
Catastrophic forgetting is a significant challenge in online continual learning (OCL), especially for non-stationary data streams that do not have well-defined task boundaries. This challenge is exacerbated by the memory constraints and privacy concerns inherent in rehearsal buffers. To tackle catastrophic forgetting, in this paper, we introduce Online-LoRA, a novel framework for task-free OCL. Online-LoRA finetunes pre-trained Vision Transformer (ViT) models in real time, addressing the limitations of rehearsal buffers while leveraging the performance benefits of pre-trained models. As its main contribution, our approach features a novel online weight regularization strategy to identify and consolidate important model parameters. Moreover, Online-LoRA leverages the training dynamics of loss values to automatically recognize data distribution shifts. Extensive experiments across many task-free OCL scenarios and benchmark datasets demonstrate that Online-LoRA adapts robustly to various ViT architectures while achieving better performance than SOTA methods.
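The two mechanisms named in the abstract, low-rank adaptation of a frozen pre-trained backbone and loss-dynamics-based detection of distribution shifts, can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation: the class names (`LoRALinear`, `LossShiftDetector`), the moving-average heuristic, and all hyperparameters are hypothetical assumptions.

```python
# Hypothetical sketch of the two ideas mentioned in the abstract, assuming standard
# LoRA on a linear layer and a simple moving-average loss heuristic for shift detection.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update W + B @ A."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pre-trained weights fixed
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Low-rank residual added to the frozen projection.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale


class LossShiftDetector:
    """Flags a possible distribution shift when the smoothed loss rises sharply."""

    def __init__(self, window: int = 50, factor: float = 1.5):
        self.window, self.factor = window, factor
        self.history: list[float] = []

    def update(self, loss: float) -> bool:
        self.history.append(loss)
        if len(self.history) < 2 * self.window:
            return False
        recent = sum(self.history[-self.window:]) / self.window
        previous = sum(self.history[-2 * self.window:-self.window]) / self.window
        # A sudden jump in the running loss suggests the data distribution has changed;
        # at that point the current adapter could be consolidated and a new one started.
        return recent > self.factor * previous
```

In an online loop, one would wrap selected ViT projection layers with `LoRALinear`, train only the low-rank parameters on the incoming stream, and use `LossShiftDetector.update()` on each mini-batch loss to decide when to consolidate the current adapter; how consolidation and importance weighting are actually performed is specified in the paper, not in this sketch.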
Cite
Text
Wei et al. "Online-LoRA: Task-Free Online Continual Learning via Low Rank Adaptation." NeurIPS 2024 Workshops: Continual_FoMo, 2024.
Markdown
[Wei et al. "Online-LoRA: Task-Free Online Continual Learning via Low Rank Adaptation." NeurIPS 2024 Workshops: Continual_FoMo, 2024.](https://mlanthology.org/neuripsw/2024/wei2024neuripsw-onlinelora/)
BibTeX
@inproceedings{wei2024neuripsw-onlinelora,
title = {{Online-LoRA: Task-Free Online Continual Learning via Low Rank Adaptation}},
author = {Wei, Xiwen and Li, Guihong and Marculescu, Radu},
booktitle = {NeurIPS 2024 Workshops: Continual_FoMo},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/wei2024neuripsw-onlinelora/}
}