Class-Incremental Learning with Strong Pre-Trained Models

Abstract

Class-incremental learning (CIL) has been widely studied under the setting of starting from a small number of classes (base classes). Instead, we explore an understudied real-world setting of CIL that starts with a strong model pre-trained on a large number of base classes. We hypothesize that a strong base model can provide a good representation for novel classes, so that incremental learning can be done with small adaptations. We propose a two-stage training scheme: i) feature augmentation, which clones part of the backbone and fine-tunes it on the novel data, and ii) fusion, which combines the base and novel classifiers into a unified classifier. Experiments show that the proposed method significantly outperforms state-of-the-art CIL methods on the large-scale ImageNet dataset (e.g., +10% overall accuracy over the best competing method). We also propose and analyze understudied practical CIL scenarios, such as base-novel overlap with distribution shift. Our proposed method is robust and generalizes to all analyzed CIL settings.
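To make the two-stage scheme concrete, here is a minimal PyTorch sketch of the idea. It assumes the pre-trained backbone is split into a shared `trunk` (kept frozen) and a cloneable `tail`, with plain linear classifier heads; all module names, the feature dimension, and the hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of the two-stage scheme: (i) feature augmentation, (ii) fusion.
# `trunk`/`tail` split, FEAT_DIM, and hyperparameters are assumptions.
import copy
import torch
import torch.nn as nn

FEAT_DIM = 512  # assumed output dimension of the backbone tail

def stage1_feature_augmentation(trunk, base_tail, novel_loader,
                                num_novel_classes, epochs=5, lr=1e-2):
    """Stage i: clone part of the backbone and fine-tune the clone,
    together with a new head, on novel-class data only."""
    novel_tail = copy.deepcopy(base_tail)        # cloned backbone branch
    novel_head = nn.Linear(FEAT_DIM, num_novel_classes)
    for p in trunk.parameters():                 # shared trunk stays frozen
        p.requires_grad_(False)
    params = list(novel_tail.parameters()) + list(novel_head.parameters())
    opt = torch.optim.SGD(params, lr=lr, momentum=0.9)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in novel_loader:                # y indexes novel classes
            feats = novel_tail(trunk(x))
            loss = ce(novel_head(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return novel_tail, novel_head

def stage2_fusion(trunk, base_tail, novel_tail, num_base, num_novel,
                  fusion_loader, epochs=5, lr=1e-2):
    """Stage ii: freeze both branches and train one unified head on the
    concatenated base/novel features over all classes."""
    for m in (trunk, base_tail, novel_tail):
        for p in m.parameters():
            p.requires_grad_(False)
    unified_head = nn.Linear(2 * FEAT_DIM, num_base + num_novel)
    opt = torch.optim.SGD(unified_head.parameters(), lr=lr, momentum=0.9)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in fusion_loader:               # y indexes all classes
            shared = trunk(x)
            feats = torch.cat([base_tail(shared), novel_tail(shared)], dim=1)
            loss = ce(unified_head(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return unified_head
```

The key design choice the sketch illustrates is that only the cloned tail and the heads are ever trained: the pre-trained trunk is frozen throughout, so the strong base representation is preserved while the novel branch adapts.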

Cite

Text

Wu et al. "Class-Incremental Learning with Strong Pre-Trained Models." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00938

Markdown

[Wu et al. "Class-Incremental Learning with Strong Pre-Trained Models." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/wu2022cvpr-classincremental/) doi:10.1109/CVPR52688.2022.00938

BibTeX

@inproceedings{wu2022cvpr-classincremental,
  title     = {{Class-Incremental Learning with Strong Pre-Trained Models}},
  author    = {Wu, Tz-Ying and Swaminathan, Gurumurthy and Li, Zhizhong and Ravichandran, Avinash and Vasconcelos, Nuno and Bhotika, Rahul and Soatto, Stefano},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {9601--9610},
  doi       = {10.1109/CVPR52688.2022.00938},
  url       = {https://mlanthology.org/cvpr/2022/wu2022cvpr-classincremental/}
}