Understanding Transformer Reasoning Capabilities via Graph Algorithms

Abstract

Which transformer scaling regimes are able to perfectly solve different classes of algorithmic problems? While tremendous empirical advances have been attained by transformer-based neural networks, a theoretical understanding of their algorithmic reasoning capabilities in realistic parameter regimes is lacking. We investigate this question in terms of the network’s depth, width, and number of extra tokens for algorithm execution. Our novel representational hierarchy separates 9 algorithmic reasoning problems into classes solvable by transformers in different realistic parameter scaling regimes. We prove that logarithmic depth is necessary and sufficient for tasks like graph connectivity, while single-layer transformers with small embedding dimensions can solve contextual retrieval tasks. We also support our theoretical analysis with ample empirical evidence using the GraphQA benchmark. These results show that transformers excel at many graph reasoning tasks, even outperforming specialized graph neural networks.
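
To give intuition for the logarithmic-depth claim, here is a minimal Python sketch, ours rather than the paper's construction, of the standard parallel "path doubling" idea: starting from edges (paths of length at most 1), each round of boolean matrix squaring doubles the maximum path length covered, so O(log n) rounds decide connectivity. This mirrors how each additional transformer layer can double the reachability horizon.

import numpy as np

def connected(adj: np.ndarray, u: int, v: int) -> bool:
    """Decide whether u and v are connected via O(log n) squaring rounds."""
    n = adj.shape[0]
    # Include self-loops so that "reach" covers paths of length <= 1.
    reach = adj.astype(bool) | np.eye(n, dtype=bool)
    # After k rounds, reach covers all paths of length <= 2^k,
    # so ceil(log2(n)) rounds cover every simple path in the graph.
    for _ in range(max(1, int(np.ceil(np.log2(n))))):
        reach = (reach.astype(int) @ reach.astype(int)) > 0
    return bool(reach[u, v])

# Example: the path graph 0-1-2-3; nodes 0 and 3 are connected.
adj = np.zeros((4, 4), dtype=bool)
for a, b in [(0, 1), (1, 2), (2, 3)]:
    adj[a, b] = adj[b, a] = True
assert connected(adj, 0, 3)

The sketch only conveys the doubling argument for the upper bound; the paper's result also includes the matching lower bound, i.e., that sublogarithmic depth cannot solve connectivity in the studied parameter regimes.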

Cite

Text

Sanford et al. "Understanding Transformer Reasoning Capabilities via Graph Algorithms." Neural Information Processing Systems, 2024. doi:10.52202/079017-2490

Markdown

[Sanford et al. "Understanding Transformer Reasoning Capabilities via Graph Algorithms." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/sanford2024neurips-understanding/) doi:10.52202/079017-2490

BibTeX

@inproceedings{sanford2024neurips-understanding,
  title     = {{Understanding Transformer Reasoning Capabilities via Graph Algorithms}},
  author    = {Sanford, Clayton and Fatemi, Bahare and Hall, Ethan and Tsitsulin, Anton and Kazemi, Mehran and Halcrow, Jonathan and Perozzi, Bryan and Mirrokni, Vahab},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2490},
  url       = {https://mlanthology.org/neurips/2024/sanford2024neurips-understanding/}
}