Constant Memory Attention Block

Abstract

Modern foundation model architectures rely on attention mechanisms to effectively capture context. However, these methods require memory that scales linearly or quadratically with the number of inputs/datapoints, limiting their applicability in low-compute domains. In this work, we propose the Constant Memory Attention Block (CMAB), a novel general-purpose attention block that computes its output in constant memory and performs updates in constant computation. Highlighting CMABs' efficacy, we introduce methods for Neural Processes and Temporal Point Processes. Empirically, we show our proposed methods achieve results competitive with state-of-the-art while being significantly more memory efficient.
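
The sketch below is not the authors' implementation; it only illustrates, under assumed names and shapes, one way an attention block can achieve the constant-memory, constant-update property described in the abstract: a fixed number of latent query vectors cross-attend to streaming inputs, with the softmax numerator and denominator kept as running sums so memory never grows with the number of datapoints.

```python
# Illustrative sketch only (assumed class/parameter names, not the paper's code).
import numpy as np

class ConstantMemoryCrossAttention:
    """Fixed latent queries attending to a stream of (key, value) pairs."""

    def __init__(self, num_latents: int, dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.queries = rng.normal(size=(num_latents, dim))  # fixed-size latent set
        self.dim = dim
        # Running statistics: unnormalized attention numerator and denominator.
        self.numer = np.zeros((num_latents, dim))
        self.denom = np.zeros(num_latents)

    def update(self, key: np.ndarray, value: np.ndarray) -> None:
        """Fold in one new datapoint; cost is O(num_latents * dim), independent
        of how many datapoints have been seen so far."""
        scores = np.exp(self.queries @ key / np.sqrt(self.dim))  # (num_latents,)
        self.numer += scores[:, None] * value[None, :]
        self.denom += scores

    def output(self) -> np.ndarray:
        """Current attention output for the fixed latent queries."""
        return self.numer / np.maximum(self.denom, 1e-12)[:, None]

# Usage: stream 1000 datapoints; stored state stays (num_latents, dim) throughout.
attn = ConstantMemoryCrossAttention(num_latents=8, dim=16)
rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.normal(size=16)
    attn.update(key=x, value=x)
print(attn.output().shape)  # (8, 16)
```

For brevity the sketch skips the usual log-sum-exp stabilization and any feed-forward or normalization layers a full attention block would include.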

Cite

Text

Feng et al. "Constant Memory Attention Block." ICML 2023 Workshops: ES-FoMO, 2023.

Markdown

[Feng et al. "Constant Memory Attention Block." ICML 2023 Workshops: ES-FoMO, 2023.](https://mlanthology.org/icmlw/2023/feng2023icmlw-constant/)

BibTeX

@inproceedings{feng2023icmlw-constant,
  title     = {{Constant Memory Attention Block}},
  author    = {Feng, Leo and Tung, Frederick and Hajimirsadeghi, Hossein and Bengio, Yoshua and Ahmed, Mohamed Osama},
  booktitle = {ICML 2023 Workshops: ES-FoMO},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/feng2023icmlw-constant/}
}