DualHSIC: HSIC-Bottleneck and Alignment for Continual Learning

Abstract

Rehearsal-based approaches are a mainstay of continual learning (CL). They mitigate the catastrophic forgetting problem by maintaining a small fixed-size buffer with a subset of data from past tasks. While most rehearsal-based approaches exploit the knowledge in buffered past data, little attention is paid to inter-task relationships or to critical task-specific and task-invariant knowledge. To appropriately leverage inter-task relationships, we propose DualHSIC, a novel CL method that boosts the performance of existing rehearsal-based methods in a simple yet effective way. DualHSIC consists of two complementary components that stem from the Hilbert-Schmidt independence criterion (HSIC): HSIC-Bottleneck for Rehearsal (HBR), which lessens inter-task interference, and HSIC Alignment (HA), which promotes task-invariant knowledge sharing. Extensive experiments show that DualHSIC can be seamlessly plugged into existing rehearsal-based methods for consistent performance improvements, outperforming recent state-of-the-art regularization-enhanced rehearsal methods.
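For context, both components build on an empirical estimate of HSIC between two batches of representations. The sketch below shows the standard biased estimator, HSIC(X, Y) ≈ tr(KHLH)/(n−1)², under the common assumption of RBF kernels; the bandwidth `sigma`, the shared bandwidth for both kernels, and the function names are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    """Pairwise RBF (Gaussian) kernel matrix over the rows of X."""
    sq_norms = np.sum(X ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC estimate: tr(K H L H) / (n - 1)^2."""
    n = X.shape[0]
    K = rbf_kernel(X, sigma)
    L = rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

In an HSIC-bottleneck-style setup such as HBR, an estimator like this would be used as a regularizer, penalizing the dependence between inputs and intermediate features while encouraging dependence with labels; the precise losses and where they are applied are specified in the paper.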

Cite

Text

Wang et al. "DualHSIC: HSIC-Bottleneck and Alignment for Continual Learning." International Conference on Machine Learning, 2023.

Markdown

[Wang et al. "DualHSIC: HSIC-Bottleneck and Alignment for Continual Learning." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/wang2023icml-dualhsic/)

BibTeX

@inproceedings{wang2023icml-dualhsic,
  title     = {{DualHSIC: HSIC-Bottleneck and Alignment for Continual Learning}},
  author    = {Wang, Zifeng and Zhan, Zheng and Gong, Yifan and Shao, Yucai and Ioannidis, Stratis and Wang, Yanzhi and Dy, Jennifer},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {36578--36592},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/wang2023icml-dualhsic/}
}