Concept-Driven Continual Learning

Abstract

This paper introduces two novel approaches to the challenge of catastrophic forgetting in continual learning: Interpretability-Guided Continual Learning (IG-CL) and the Intrinsically Interpretable Neural Network (IN2). These frameworks bring interpretability into continual learning by systematically managing human-understandable concepts within neural network models to improve the retention of knowledge from previous tasks, providing transparency and control over the continual training process. While our primary focus is a new framework for designing continual learning algorithms around interpretability rather than raw performance, we observe that our methods often surpass existing ones: IG-CL employs interpretability tools to guide neural network training, improving average incremental accuracy by up to 1.4% over existing methods; IN2, inspired by the Concept Bottleneck Model, adjusts concept units for both new and existing tasks, reducing average incremental forgetting by up to 9.1%. Both frameworks outperform exemplar-free methods, are competitive with exemplar-based methods, and their performance can be further improved by up to 18% when combined with exemplar-based strategies. In addition, IG-CL and IN2 are memory-efficient, as they require no extra memory for storing data from previous tasks. These advances mark a promising new direction for continual learning through enhanced interpretability.
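To make the Concept Bottleneck idea behind IN2 concrete, the sketch below shows a minimal concept-bottleneck model that grows its bottleneck when a new task arrives. This is an illustrative toy, not the authors' actual IN2 implementation: the class name, the `add_concepts` method, and the random/zero weight initializations are all assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

class ConceptBottleneck:
    """Toy concept-bottleneck sketch: inputs are first mapped to
    human-interpretable concept scores, and the label is predicted
    from those scores alone. Weights are random placeholders."""

    def __init__(self, n_features, n_concepts, n_classes):
        # x -> concepts: the interpretable bottleneck layer.
        self.W_concept = rng.normal(size=(n_features, n_concepts))
        # concepts -> class logits.
        self.W_label = rng.normal(size=(n_concepts, n_classes))

    def concepts(self, x):
        # Sigmoid so each unit reads as "concept present / absent".
        return 1.0 / (1.0 + np.exp(-x @ self.W_concept))

    def predict(self, x):
        return self.concepts(x) @ self.W_label

    def add_concepts(self, k):
        """Grow the bottleneck for a new task by appending k concept
        units; existing units are left untouched."""
        n_features = self.W_concept.shape[0]
        n_classes = self.W_label.shape[1]
        self.W_concept = np.hstack(
            [self.W_concept, rng.normal(size=(n_features, k))])
        # Zero label weights for new units preserve old predictions.
        self.W_label = np.vstack(
            [self.W_label, np.zeros((k, n_classes))])

model = ConceptBottleneck(n_features=8, n_concepts=4, n_classes=3)
x = rng.normal(size=(2, 8))
before = model.predict(x)
model.add_concepts(2)  # new task: two extra concept units
after = model.predict(x)
# Because the new units' label weights start at zero, predictions on
# old inputs are unchanged, i.e. old-task knowledge is retained.
assert np.allclose(before, after)
```

Because the bottleneck exposes named concept scores, one can inspect or freeze individual concept units per task, which is the kind of control over the training process the abstract refers to.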

Cite

Text

Yang et al. "Concept-Driven Continual Learning." Transactions on Machine Learning Research, 2024.

Markdown

[Yang et al. "Concept-Driven Continual Learning." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/yang2024tmlr-conceptdriven/)

BibTeX

@article{yang2024tmlr-conceptdriven,
  title     = {{Concept-Driven Continual Learning}},
  author    = {Yang, Sin-Han and Oikarinen, Tuomas and Weng, Tsui-Wei},
  journal   = {Transactions on Machine Learning Research},
  year      = {2024},
  url       = {https://mlanthology.org/tmlr/2024/yang2024tmlr-conceptdriven/}
}