Steering Prototypes with Prompt-Tuning for Rehearsal-Free Continual Learning

Abstract

In the context of continual learning, prototypes, as representative class embeddings, offer advantages in memory conservation and the mitigation of catastrophic forgetting. However, challenges related to semantic drift and prototype interference persist. In this study, we introduce the Contrastive Prototypical Prompt (CPP) approach. Through task-specific prompt-tuning, underpinned by a contrastive learning objective, we effectively address both challenges. Our evaluations on four challenging class-incremental benchmarks reveal that CPP achieves a significant 4% to 6% improvement over state-of-the-art methods. Importantly, CPP operates without a rehearsal buffer and narrows the performance divergence between continual and offline joint learning, suggesting an innovative scheme for Transformer-based continual learning systems.
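
The core mechanism described in the abstract, task-specific prompt-tuning trained with a contrastive objective that anchors embeddings to class prototypes, can be illustrated with a short sketch. The PyTorch snippet below is a minimal, hypothetical illustration rather than the authors' implementation: it assumes prompt-conditioned embeddings from a frozen backbone are pulled toward their own class prototype and pushed away from the prototypes of other classes via an InfoNCE-style loss. All names here (contrastive_prototypical_loss, feats, protos) are illustrative.

import torch
import torch.nn.functional as F

def contrastive_prototypical_loss(features, labels, prototypes, temperature=0.1):
    # Normalize prompt-conditioned embeddings and class prototypes to the unit sphere.
    features = F.normalize(features, dim=-1)           # (B, D)
    prototypes = F.normalize(prototypes, dim=-1)       # (C, D)
    # Cosine similarity of each embedding to every prototype, scaled by temperature.
    logits = features @ prototypes.t() / temperature   # (B, C)
    # InfoNCE-style objective: attract to the own-class prototype, repel the rest.
    return F.cross_entropy(logits, labels)

# Toy usage: 8 embeddings of dimension 64 from 5 classes seen so far.
feats = torch.randn(8, 64, requires_grad=True)   # stands in for prompt-tuned backbone features
labels = torch.randint(0, 5, (8,))
protos = torch.randn(5, 64)                      # stands in for running class-mean embeddings
contrastive_prototypical_loss(feats, labels, protos).backward()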

Cite

Text

Li et al. "Steering Prototypes with Prompt-Tuning for Rehearsal-Free Continual Learning." Winter Conference on Applications of Computer Vision, 2024.

Markdown

[Li et al. "Steering Prototypes with Prompt-Tuning for Rehearsal-Free Continual Learning." Winter Conference on Applications of Computer Vision, 2024.](https://mlanthology.org/wacv/2024/li2024wacv-steering/)

BibTeX

@inproceedings{li2024wacv-steering,
  title     = {{Steering Prototypes with Prompt-Tuning for Rehearsal-Free Continual Learning}},
  author    = {Li, Zhuowei and Zhao, Long and Zhang, Zizhao and Zhang, Han and Liu, Di and Liu, Ting and Metaxas, Dimitris N.},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2024},
  pages     = {2523--2533},
  url       = {https://mlanthology.org/wacv/2024/li2024wacv-steering/}
}