Understanding Counting in Small Transformers: The Interplay Between Attention and Feed-Forward Layers

Abstract

We provide a comprehensive analysis of simple transformer models trained on the histogram task, where the goal is to count the occurrences of each item in the input sequence from a fixed alphabet. Despite its apparent simplicity, this task exhibits a rich phenomenology that allows us to characterize how different architectural components contribute to the emergence of distinct algorithmic solutions. In particular, we showcase the existence of two qualitatively different mechanisms that implement a solution: relation-based and inventory-based counting. Which solution a model can implement depends non-trivially on the precise choice of the attention mechanism, activation function, memorization capacity, and the presence of a beginning-of-sequence token. By introspecting learned models on the counting task, we find evidence for the formation of both mechanisms. From a broader perspective, our analysis offers a framework to understand how the interaction of different architectural components of transformer models shapes diverse algorithmic solutions and approximations.
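To make the histogram task concrete, the following minimal Python sketch generates one training example in the spirit of the abstract: each position in a sequence over a fixed alphabet is labeled with the number of times its token occurs in the sequence. The alphabet size, sequence length, and the optional beginning-of-sequence token are illustrative assumptions, not the paper's exact configuration.

# Hypothetical data generator for the histogram (counting) task.
# Alphabet size, sequence length, and BOS handling are illustrative choices.
import random
from collections import Counter

ALPHABET = list("abcdefghij")  # fixed alphabet (size chosen for illustration)
SEQ_LEN = 8
USE_BOS = True                 # the paper studies the effect of a BOS token

def make_example(rng: random.Random):
    """Sample one (input sequence, per-position count targets) pair."""
    seq = [rng.choice(ALPHABET) for _ in range(SEQ_LEN)]
    counts = Counter(seq)
    # Target at position i = number of occurrences of seq[i] in the sequence.
    targets = [counts[tok] for tok in seq]
    if USE_BOS:
        seq = ["<bos>"] + seq
        targets = [0] + targets  # BOS carries no count
    return seq, targets

rng = random.Random(0)
x, y = make_example(rng)
print(x)
print(y)

For instance, the input ['<bos>', 'a', 'b', 'a'] would receive the targets [0, 2, 1, 2].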

Cite

Text

Behrens et al. "Understanding Counting in Small Transformers: The Interplay Between Attention and Feed-Forward Layers." ICML 2024 Workshops: MI, 2024.

Markdown

[Behrens et al. "Understanding Counting in Small Transformers: The Interplay Between Attention and Feed-Forward Layers." ICML 2024 Workshops: MI, 2024.](https://mlanthology.org/icmlw/2024/behrens2024icmlw-understanding/)

BibTeX

@inproceedings{behrens2024icmlw-understanding,
  title     = {{Understanding Counting in Small Transformers: The Interplay Between Attention and Feed-Forward Layers}},
  author    = {Behrens, Freya and Biggio, Luca and Zdeborova, Lenka},
  booktitle = {ICML 2024 Workshops: MI},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/behrens2024icmlw-understanding/}
}