Differentiable Hebbian Plasticity for Continual Learning

Abstract

Catastrophic forgetting is a central challenge for continual learning systems: neural networks fail to retain old knowledge while learning new tasks sequentially. We propose a Differentiable Hebbian Plasticity (DHP) Softmax layer, which adds a fast-learning plastic component to the slow weights of the softmax output layer. The DHP Softmax behaves as a compressed episodic memory that reactivates existing memory traces while creating new ones. We demonstrate the flexibility of our model by combining it with well-known consolidation methods to mitigate catastrophic forgetting. We evaluate our approach on the Permuted MNIST and Split MNIST benchmarks, and introduce Imbalanced Permuted MNIST, a variant that combines the challenges of class imbalance and concept drift. Our model requires no additional hyperparameters and outperforms comparable baselines by reducing forgetting.
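To make the idea in the abstract concrete, below is a minimal sketch (not the authors' released code) of a softmax output layer whose effective weights are the slow weights plus a plasticity-gated Hebbian trace, in the style of differentiable plasticity. The class name `PlasticSoftmax`, the batch-averaged pre/post update, and the fixed Hebbian rate `eta` are illustrative assumptions; the paper's exact update rule and training of the plasticity parameters may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PlasticSoftmax(nn.Module):
    """Sketch: softmax output layer with slow weights plus a fast,
    plasticity-gated Hebbian trace (illustrative, not the paper's code)."""

    def __init__(self, in_features: int, num_classes: int, eta: float = 0.001):
        super().__init__()
        # Slow weights and per-connection plasticity gates, trained by backprop.
        self.slow = nn.Parameter(0.01 * torch.randn(num_classes, in_features))
        self.alpha = nn.Parameter(0.01 * torch.randn(num_classes, in_features))
        self.eta = eta  # Hebbian rate; fixed here for simplicity
        # Fast Hebbian trace: persists across steps, not updated by gradients.
        self.register_buffer("hebb", torch.zeros(num_classes, in_features))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Effective weights = slow component + plasticity-gated Hebbian trace.
        logits = F.linear(h, self.slow + self.alpha * self.hebb)
        post = F.softmax(logits, dim=-1)
        # Generic Hebbian update from pre/post activity, batch-averaged and
        # detached so the trace acts as a fast, non-gradient memory.
        outer = torch.einsum("bc,bf->cf", post, h) / h.size(0)
        self.hebb = ((1.0 - self.eta) * self.hebb + self.eta * outer).detach()
        return logits

# Usage: replace a classifier's final linear layer with this plastic layer.
# layer = PlasticSoftmax(in_features=400, num_classes=10)
# logits = layer(hidden_activations)       # hidden_activations: (batch, 400)
# loss = F.cross_entropy(logits, targets)  # slow weights and alpha via backprop
```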

Cite

Text

Thangarasa et al. "Differentiable Hebbian Plasticity for Continual Learning." ICML 2019 Workshops: AMTL, 2019.

Markdown

[Thangarasa et al. "Differentiable Hebbian Plasticity for Continual Learning." ICML 2019 Workshops: AMTL, 2019.](https://mlanthology.org/icmlw/2019/thangarasa2019icmlw-differentiable/)

BibTeX

@inproceedings{thangarasa2019icmlw-differentiable,
  title     = {{Differentiable Hebbian Plasticity for Continual Learning}},
  author    = {Thangarasa, Vithursan and Miconi, Thomas and Taylor, Graham W.},
  booktitle = {ICML 2019 Workshops: AMTL},
  year      = {2019},
  url       = {https://mlanthology.org/icmlw/2019/thangarasa2019icmlw-differentiable/}
}