Natural Continual Learning: Success Is a Journey, Not (just) a Destination

Abstract

Biological agents are known to learn many different tasks over the course of their lives, and to be able to revisit previous tasks and behaviors with little to no loss in performance. In contrast, artificial agents are prone to ‘catastrophic forgetting’, whereby performance on previous tasks deteriorates rapidly as new ones are acquired. This shortcoming has recently been addressed using methods that encourage parameters to stay close to those used for previous tasks. This can be done by (i) using specific parameter regularizers that map out suitable destinations in parameter space, or (ii) guiding the optimization journey by projecting gradients into subspaces that do not interfere with previous tasks. However, these methods often exhibit subpar performance in both feedforward and recurrent neural networks, with recurrent networks being of particular interest to the study of neural dynamics supporting biological continual learning. In this work, we propose Natural Continual Learning (NCL), a new method that unifies weight regularization and projected gradient descent. NCL uses Bayesian weight regularization to encourage good performance on all tasks at convergence, and combines this with gradient projection using the prior precision, which prevents catastrophic forgetting during optimization. Our method outperforms both standard weight regularization techniques and projection-based approaches when applied to continual learning problems in feedforward and recurrent networks. Finally, the trained networks evolve task-specific dynamics that are strongly preserved as new tasks are learned, similar to experimental findings in biological circuits.
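The core idea described in the abstract, a Bayesian weight-regularized loss whose gradient is then preconditioned (projected) by the inverse prior precision, can be illustrated with a toy sketch. This is not the authors' code: the function name `ncl_update`, the diagonal precision, and the quadratic toy loss are all illustrative assumptions; the actual method uses structured (e.g. Kronecker-factored) precision approximations in real networks.

```python
import numpy as np

def ncl_update(theta, grad_loss, theta_prev, Lambda_prev, lr=1e-2):
    """One NCL-style step (illustrative sketch, not the paper's implementation).

    theta:       current parameters
    grad_loss:   gradient of the new task's loss at theta
    theta_prev:  parameters after learning previous tasks
    Lambda_prev: prior precision over parameters (e.g. a Fisher-based
                 approximation accumulated from previous tasks)
    """
    # Gradient of the Bayesian weight-regularized objective:
    #   L(theta) + 0.5 * (theta - theta_prev)^T Lambda_prev (theta - theta_prev)
    g = grad_loss + Lambda_prev @ (theta - theta_prev)
    # Precondition with the inverse prior precision: directions that were
    # important for previous tasks (high precision) are updated slowly,
    # which protects old solutions *during* optimization, not just at the end.
    step = np.linalg.solve(Lambda_prev, g)
    return theta - lr * step

# Toy usage: a 2-parameter quadratic "new task" pulling toward `target`,
# where the first parameter was important for earlier tasks (high precision).
theta_prev = np.array([1.0, -1.0])
Lambda_prev = np.diag([100.0, 1.0])
target = np.array([3.0, 3.0])

theta = theta_prev.copy()
for _ in range(200):
    grad = theta - target  # gradient of 0.5 * ||theta - target||^2
    theta = ncl_update(theta, grad, theta_prev, Lambda_prev, lr=0.1)
```

After convergence, the high-precision parameter stays near its previous value while the low-precision one moves freely toward the new task's optimum, which is the qualitative behavior the abstract attributes to NCL.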

Cite

Text

Kao et al. "Natural Continual Learning: Success Is a Journey, Not (just) a Destination." Neural Information Processing Systems, 2021.

Markdown

[Kao et al. "Natural Continual Learning: Success Is a Journey, Not (just) a Destination." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/kao2021neurips-natural/)

BibTeX

@inproceedings{kao2021neurips-natural,
  title     = {{Natural Continual Learning: Success Is a Journey, Not (just) a Destination}},
  author    = {Kao, Ta-Chu and Jensen, Kristopher and van de Ven, Gido and Bernacchia, Alberto and Hennequin, Guillaume},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/kao2021neurips-natural/}
}