Mechanisms of Symbol Processing for In-Context Learning in Transformer Networks

Abstract

Large Language Models (LLMs) have demonstrated impressive abilities in symbol processing through in-context learning (ICL). This success flies in the face of decades of critiques asserting that artificial neural networks cannot master abstract symbol manipulation. We seek to understand the mechanisms that can enable robust symbol processing in transformer networks, illuminating both the unanticipated success, and the significant limitations, of transformers in symbol processing. Borrowing insights from symbolic AI and cognitive science on the power of Production System architectures, we develop a high-level Production System Language, PSL, that allows us to write symbolic programs for complex, abstract symbol processing, and we create compilers that precisely implement PSL programs in transformer networks which are, by construction, 100% mechanistically interpretable. The work is driven by study of a purely abstract (semantics-free) symbolic task that we develop, the Templatic Generation Task (TGT). Although developed through study of TGT, PSL is, we demonstrate, highly general: it is Turing Universal. The new type of transformer architecture that we compile from PSL programs suggests a number of paths for enhancing transformers' capabilities at symbol processing. We note, however, that the work we report addresses computability, not learnability, by transformer networks.

Cite

Text

Smolensky et al. "Mechanisms of Symbol Processing for In-Context Learning in Transformer Networks." Journal of Artificial Intelligence Research, 2025. doi:10.1613/JAIR.1.17469

Markdown

[Smolensky et al. "Mechanisms of Symbol Processing for In-Context Learning in Transformer Networks." Journal of Artificial Intelligence Research, 2025.](https://mlanthology.org/jair/2025/smolensky2025jair-mechanisms/) doi:10.1613/JAIR.1.17469

BibTeX

@article{smolensky2025jair-mechanisms,
  title     = {{Mechanisms of Symbol Processing for In-Context Learning in Transformer Networks}},
  author    = {Smolensky, Paul and Fernandez, Roland and Zhou, Zhenghao Herbert and Opper, Mattia and Davies, Adam and Gao, Jianfeng},
  journal   = {Journal of Artificial Intelligence Research},
  year      = {2025},
  doi       = {10.1613/JAIR.1.17469},
  volume    = {84},
  url       = {https://mlanthology.org/jair/2025/smolensky2025jair-mechanisms/}
}