A Closer Look at Rehearsal-Free Continual Learning

Abstract

Continual learning is a setting in which machine learning models learn novel concepts from continuously shifting training data while avoiding the degradation of knowledge about previously seen classes, which may disappear from the training data for extended periods (a phenomenon known as catastrophic forgetting). Current approaches for continual learning of a single expanding task (i.e., class-incremental continual learning) require extensive rehearsal of previously seen data to avoid this degradation. Unfortunately, rehearsal comes at a memory cost and may also violate data privacy. Instead, we explore new combinations of knowledge distillation and parameter regularization that achieve strong continual learning performance without rehearsal. Specifically, we take a deep dive into common continual learning techniques: prediction distillation, feature distillation, L2 parameter regularization, and EWC parameter regularization. We first disprove the common assumption that parameter regularization techniques fail for rehearsal-free continual learning of a single, expanding task. Next, we explore how to leverage knowledge from a pre-trained model in rehearsal-free continual learning and find that vanilla L2 parameter regularization outperforms EWC parameter regularization and feature distillation. Finally, we turn to the recently popular ImageNet-R benchmark and show that L2 parameter regularization applied to the self-attention blocks of a Vision Transformer (ViT) outperforms recent prompting-based continual learning methods.
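To make the comparison in the abstract concrete, the two parameter-regularization penalties it contrasts can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the function names and the `lam` weighting are illustrative assumptions. L2 parameter regularization penalizes drift of every parameter away from its previous-task value equally, while EWC weights each parameter's drift by an estimate of its (diagonal) Fisher information, so that parameters deemed important for earlier tasks are held more tightly.

```python
import numpy as np

def l2_param_penalty(current, previous, lam=1.0):
    """L2 parameter regularization: uniformly penalize drift of the
    current parameters away from the weights saved after the previous task.
    `current` and `previous` are dicts mapping parameter names to arrays."""
    return lam * sum(np.sum((current[k] - previous[k]) ** 2) for k in previous)

def ewc_penalty(current, previous, fisher, lam=1.0):
    """EWC parameter regularization: same squared drift as L2, but each
    parameter is weighted by its diagonal Fisher information (`fisher`),
    estimated on the previous task's data."""
    return lam * sum(np.sum(fisher[k] * (current[k] - previous[k]) ** 2)
                     for k in previous)
```

During training on a new task, either penalty would be added to the task loss; restricting the regularized parameter dict to the self-attention blocks of a ViT corresponds to the variant the abstract reports as strongest on ImageNet-R.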

Cite

Text

Smith et al. "A Closer Look at Rehearsal-Free Continual Learning." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023. doi:10.1109/CVPRW59228.2023.00239

Markdown

[Smith et al. "A Closer Look at Rehearsal-Free Continual Learning." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023.](https://mlanthology.org/cvprw/2023/smith2023cvprw-closer/) doi:10.1109/CVPRW59228.2023.00239

BibTeX

@inproceedings{smith2023cvprw-closer,
  title     = {{A Closer Look at Rehearsal-Free Continual Learning}},
  author    = {Smith, James Seale and Tian, Junjiao and Halbe, Shaunak and Hsu, Yen-Chang and Kira, Zsolt},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2023},
  pages     = {2410--2420},
  doi       = {10.1109/CVPRW59228.2023.00239},
  url       = {https://mlanthology.org/cvprw/2023/smith2023cvprw-closer/}
}