BayesPCN: A Continually Learnable Predictive Coding Associative Memory

Abstract

Associative memory plays an important role in human intelligence, and its mechanisms have been linked to attention in machine learning. While the machine learning community's interest in associative memories has recently been rekindled, most work has focused on memory recall (*read*) over memory learning (*write*). In this paper, we present BayesPCN, a hierarchical associative memory capable of performing continual one-shot memory writes without meta-learning. Moreover, BayesPCN is able to gradually forget past observations (*forget*) to free its memory. Experiments show that BayesPCN can recall corrupted i.i.d. high-dimensional data observed hundreds to a thousand "timesteps" ago without a large drop in recall ability compared to the state-of-the-art offline-learned parametric memory models.
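
To make the three operations the abstract names concrete, below is a minimal sketch of a *read*/*write*/*forget* associative-memory interface. This is not BayesPCN's predictive-coding algorithm (which performs Bayesian one-shot writes and gradual forgetting over a hierarchical network); it is a deliberately simple nearest-neighbour stand-in, and every name in it is hypothetical.

```python
# Illustrative toy associative memory with the write/read/forget interface
# described in the abstract. NOT BayesPCN: recall here is nearest-neighbour
# lookup, and "forget" simply drops the oldest entries rather than gradually
# decaying a Bayesian posterior as BayesPCN does.
import numpy as np


class ToyAssociativeMemory:
    def __init__(self):
        self.patterns = []  # stored observations, oldest first

    def write(self, x: np.ndarray) -> None:
        """One-shot write: store the observation without gradient training."""
        self.patterns.append(np.asarray(x, dtype=float))

    def read(self, query: np.ndarray) -> np.ndarray:
        """Recall: return the stored pattern closest to a (corrupted) query."""
        dists = [np.linalg.norm(p - query) for p in self.patterns]
        return self.patterns[int(np.argmin(dists))]

    def forget(self, n: int = 1) -> None:
        """Free memory by discarding the n oldest observations."""
        self.patterns = self.patterns[n:]


# Usage: write clean data continually, then recall from a noisy cue.
mem = ToyAssociativeMemory()
rng = np.random.default_rng(0)
clean = rng.normal(size=(5, 16))
for x in clean:
    mem.write(x)
noisy = clean[2] + 0.1 * rng.normal(size=16)
assert np.allclose(mem.read(noisy), clean[2])
mem.forget(1)  # oldest observation is gone, freeing memory
```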

Cite

Text

Yoo and Wood. "BayesPCN: A Continually Learnable Predictive Coding Associative Memory." Neural Information Processing Systems, 2022.

Markdown

[Yoo and Wood. "BayesPCN: A Continually Learnable Predictive Coding Associative Memory." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/yoo2022neurips-bayespcn/)

BibTeX

@inproceedings{yoo2022neurips-bayespcn,
  title     = {{BayesPCN: A Continually Learnable Predictive Coding Associative Memory}},
  author    = {Yoo, Jinsoo and Wood, Frank},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/yoo2022neurips-bayespcn/}
}