ICICLE: Interpretable Class Incremental Continual Learning

Abstract

Continual learning enables incremental learning of new tasks without forgetting those previously learned, resulting in positive knowledge transfer that can enhance performance on both new and old tasks. However, continual learning poses new challenges for interpretability, as the rationale behind model predictions may change over time, leading to interpretability concept drift. We address this problem by proposing Interpretable Class-InCremental LEarning (ICICLE), an exemplar-free method built on prototypical parts. It consists of three crucial novelties: an interpretability regularization that distills previously learned concepts while preserving user-friendly positive reasoning; a proximity-based prototype initialization strategy dedicated to the fine-grained setting; and a task-recency bias compensation devoted to prototypical parts. Our experimental results demonstrate that ICICLE reduces interpretability concept drift and outperforms existing exemplar-free methods for common class-incremental learning when applied to concept-based models.

Cite

Text

Rymarczyk et al. "ICICLE: Interpretable Class Incremental Continual Learning." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.00181

Markdown

[Rymarczyk et al. "ICICLE: Interpretable Class Incremental Continual Learning." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/rymarczyk2023iccv-icicle/) doi:10.1109/ICCV51070.2023.00181

BibTeX

@inproceedings{rymarczyk2023iccv-icicle,
  title     = {{ICICLE: Interpretable Class Incremental Continual Learning}},
  author    = {Rymarczyk, Dawid and van de Weijer, Joost and Zieliński, Bartosz and Twardowski, Bartłomiej},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {1887--1898},
  doi       = {10.1109/ICCV51070.2023.00181},
  url       = {https://mlanthology.org/iccv/2023/rymarczyk2023iccv-icicle/}
}