Sparse Distributed Memory Is a Continual Learner

Abstract

Continual learning is a problem for artificial neural networks that their biological counterparts are adept at solving. Building on work using Sparse Distributed Memory (SDM) to connect a core neural circuit with the powerful Transformer model, we create a modified Multi-Layered Perceptron (MLP) that is a strong continual learner. We find that every component of our MLP variant translated from biology is necessary for continual learning. Our solution is also free from any memory replay or task information, and introduces novel methods to train sparse networks that may be broadly applicable.
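Since the abstract names the architectural ingredients only at a high level, below is a minimal sketch of the kind of SDM-inspired MLP layer the paper describes. This is an assumption-laden illustration, not the authors' released code: it assumes the biologically derived components include L2-normalized inputs and weights (cosine-similarity matching), no bias term, and a Top-K winner-take-all activation, and all names (TopKSDMLayer, n_neurons, k) are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKSDMLayer(nn.Module):
    """Illustrative SDM-style hidden layer (hypothetical sketch):
    cosine-similarity matching against unit-norm weight rows,
    no bias term, and a Top-K winner-take-all nonlinearity."""

    def __init__(self, in_dim: int, n_neurons: int, k: int):
        super().__init__()
        # Each weight row acts as a neuron's "address" in SDM terms.
        self.weight = nn.Parameter(torch.randn(n_neurons, in_dim))
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.normalize(x, dim=-1)            # unit-norm inputs
        w = F.normalize(self.weight, dim=-1)  # unit-norm addresses
        sims = x @ w.t()                      # cosine similarities, no bias
        top = sims.topk(self.k, dim=-1)       # k best-matching neurons
        mask = torch.zeros_like(sims).scatter_(-1, top.indices, 1.0)
        return sims * mask                    # all other neurons stay silent

As a usage example, TopKSDMLayer(in_dim=784, n_neurons=1000, k=10) applied to a (32, 784) batch returns a (32, 1000) activation with at most 10 nonzero entries per row. Sparse, mostly non-overlapping activations of this kind are the property such a layer relies on to limit interference between tasks, rather than memory replay or task labels.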

Cite

Text

Bricken et al. "Sparse Distributed Memory Is a Continual Learner." International Conference on Learning Representations, 2023.

Markdown

[Bricken et al. "Sparse Distributed Memory Is a Continual Learner." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/bricken2023iclr-sparse/)

BibTeX

@inproceedings{bricken2023iclr-sparse,
  title     = {{Sparse Distributed Memory Is a Continual Learner}},
  author    = {Bricken, Trenton and Davies, Xander and Singh, Deepak and Krotov, Dmitry and Kreiman, Gabriel},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/bricken2023iclr-sparse/}
}