Attention Approximates Sparse Distributed Memory

Abstract

While Attention has become an important mechanism in deep learning, there remains limited intuition for why it works so well. Here, we show that, under certain data conditions, Transformer Attention closely approximates Kanerva's Sparse Distributed Memory (SDM), a biologically plausible associative memory model. We confirm that these conditions are satisfied in pre-trained GPT-2 Transformer models. We discuss the implications of the Attention-SDM map and provide new computational and biological interpretations of Attention.
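
To make the claimed correspondence concrete, the sketch below (not the paper's code or exact formulation) compares a softmax Attention read with a simplified SDM read over the same stored patterns. The binary patterns, the Hamming radius d, and the softmax temperature beta are illustrative assumptions chosen for this toy example; the paper derives the precise conditions under which the two weightings align.

import numpy as np

rng = np.random.default_rng(0)
n, dim = 512, 64                                     # number of stored patterns, pattern width

# Random binary address/value pairs, in the spirit of an SDM-style associative memory.
addresses = rng.integers(0, 2, size=(n, dim))        # keys
values    = rng.integers(0, 2, size=(n, dim))        # stored payloads
query     = addresses[0] ^ (rng.random(dim) < 0.05)  # noisy copy of pattern 0

# SDM-style read: average the values whose addresses fall within a hard
# Hamming radius d of the query.
d = 20                                               # Hamming radius (assumed)
hamming = np.sum(addresses != query, axis=1)
sdm_weights = (hamming <= d).astype(float)
sdm_read = sdm_weights @ values / sdm_weights.sum()

# Attention-style read: softmax over scaled dot products. On normalized
# vectors the dot product is a monotone function of Hamming distance, so an
# exponential weighting can mimic the hard cutoff for a suitable beta.
normalize = lambda x: x / np.linalg.norm(x, axis=-1, keepdims=True)
beta = 30.0                                          # softmax temperature (assumed)
scores = normalize(addresses) @ normalize(query)
attn_weights = np.exp(beta * scores)
attn_weights /= attn_weights.sum()
attn_read = attn_weights @ values

print("cosine(SDM read, Attention read):",
      float(normalize(sdm_read) @ normalize(attn_read)))

With these toy settings both reads are dominated by the value stored at pattern 0, so the printed cosine similarity is close to 1; the choice of beta plays the role that the Hamming radius plays in SDM.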

Cite

Text

Bricken and Pehlevan. "Attention Approximates Sparse Distributed Memory." Neural Information Processing Systems, 2021.

Markdown

[Bricken and Pehlevan. "Attention Approximates Sparse Distributed Memory." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/bricken2021neurips-attention/)

BibTeX

@inproceedings{bricken2021neurips-attention,
  title     = {{Attention Approximates Sparse Distributed Memory}},
  author    = {Bricken, Trenton and Pehlevan, Cengiz},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/bricken2021neurips-attention/}
}