Depth Extrapolation of Decoders Trained on Nested Structures

Abstract

Reasoning problems with deeply nested formal statements are challenging for humans and machines alike. We investigate how next-token predictors learn such structures, and whether they extrapolate to more deeply nested cases, within a single inference pass. A case study of Boolean logic simplification shows that a specialized decoder Transformer appears to perform well when it can overfit, but fails to extrapolate. To understand whether this limitation is universal, we propose a theoretical grounding of memorization in a self-attention head. We apply this theory to a simpler problem: completion of a bounded stack of parentheses. From the theoretical construction we derive a closed-form model that perfectly fits a single-sequence training set. We prove that it also completes any out-of-sample parentheses prefix, regardless of the context depth. In contrast, we observe that decoder Transformers trained with gradient descent on this task fail at depth extrapolation. Gradient-trained decoders require large sample sizes and a high-dimensional embedding space to achieve high accuracy on test sets nearly as deep as the training set, and when the gap between training and test depths widens, they fail.
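The parentheses task admits a short illustration. The sketch below is not the paper's implementation; the tokenization, the depth-counting rule, and the function name `complete_parentheses` are assumptions chosen to make the task concrete. Given a valid prefix over '(' and ')', the target completion is exactly the run of closing brackets that balances it, so a model extrapolates in depth precisely when it emits the right number of ')' tokens for prefixes nested more deeply than anything seen in training.

```python
# Illustrative sketch of the bounded-parentheses completion task (assumed
# formulation; the paper's exact data format is not given on this page).

def complete_parentheses(prefix: str) -> str:
    """Return the suffix of ')' tokens that balances a valid '('/')' prefix.

    The nesting depth at the end of the prefix equals the number of
    unmatched '(' tokens, which is exactly how many ')' tokens a
    depth-extrapolating model must emit.
    """
    depth = 0
    for token in prefix:
        depth += 1 if token == "(" else -1
        assert depth >= 0, "not a valid prefix of a balanced string"
    return ")" * depth

# In-range example, comparable in depth to a hypothetical training set ...
assert complete_parentheses("(()((") == ")))"
# ... and a deeper out-of-range prefix: the depth-extrapolation case.
assert complete_parentheses("((((((((") == "))))))))"
```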

Cite

Text

Emile R. Richard. "Depth Extrapolation of Decoders Trained on Nested Structures." NeurIPS 2024 Workshops: M3L, 2024.

Markdown

[Emile R. Richard. "Depth Extrapolation of Decoders Trained on Nested Structures." NeurIPS 2024 Workshops: M3L, 2024.](https://mlanthology.org/neuripsw/2024/richard2024neuripsw-depth/)

BibTeX

@inproceedings{richard2024neuripsw-depth,
  title     = {{Depth Extrapolation of Decoders Trained on Nested Structures}},
  author    = {Richard, Emile R},
  booktitle = {NeurIPS 2024 Workshops: M3L},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/richard2024neuripsw-depth/}
}