Universal Neurons in GPT2 Language Models
Abstract
A basic question within the emerging field of mechanistic interpretability is the degree to which neural networks learn the same underlying mechanisms. In other words, are neural mechanisms universal across different models? In this work, we study the universality of individual neurons across GPT2 models trained from different initial random seeds, motivated by the hypothesis that universal neurons are likely to be interpretable. In particular, we compute pairwise correlations of neuron activations over 100 million tokens for every neuron pair across five different seeds and find that 1-5% of neurons are universal; that is, they form pairs of neurons that consistently activate on the same inputs. We then study these universal neurons in detail, finding that they usually have clear interpretations, and taxonomize them into a small number of neuron families. We conclude by studying patterns in neuron weights to establish several universal functional roles of neurons in simple circuits: deactivating attention heads, changing the entropy of the next-token distribution, and predicting the next token to (not) be within a particular set.
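As a rough illustration of the correlation analysis the abstract describes, the sketch below computes Pearson correlations between every neuron pair across two models and flags neurons whose best match exceeds a threshold. All names, the random activations, and the 0.5 cutoff are illustrative assumptions, not the authors' actual code or hyperparameters.

```python
# Hypothetical sketch of a pairwise neuron-correlation analysis.
# Activations, helper names, and the threshold are assumptions for
# illustration; they are not taken from the paper's implementation.
import numpy as np

def pairwise_neuron_correlations(acts_a: np.ndarray, acts_b: np.ndarray) -> np.ndarray:
    """Pearson correlation between every neuron pair of two models.

    acts_a: (n_tokens, n_neurons_a) activations from seed A
    acts_b: (n_tokens, n_neurons_b) activations from seed B
    Returns an (n_neurons_a, n_neurons_b) correlation matrix.
    """
    a = (acts_a - acts_a.mean(axis=0)) / (acts_a.std(axis=0) + 1e-8)
    b = (acts_b - acts_b.mean(axis=0)) / (acts_b.std(axis=0) + 1e-8)
    return (a.T @ b) / acts_a.shape[0]

# Toy example: random activations stand in for real ones collected over
# a large token corpus (the paper uses 100 million tokens).
rng = np.random.default_rng(0)
acts_seed1 = rng.standard_normal((10_000, 64))
acts_seed2 = rng.standard_normal((10_000, 64))

corr = pairwise_neuron_correlations(acts_seed1, acts_seed2)

# Call a seed-1 neuron "universal" w.r.t. seed 2 if some seed-2 neuron
# matches it with correlation above a cutoff (0.5 is an assumed value).
THRESHOLD = 0.5
universal_mask = corr.max(axis=1) > THRESHOLD
print(f"{universal_mask.mean():.1%} of seed-1 neurons have a high-correlation match")
```

In the paper's setting this comparison would be repeated across all five seeds, with a neuron counted as universal only if it has a consistent high-correlation counterpart in each of the other models.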
Cite
Text
Gurnee et al. "Universal Neurons in GPT2 Language Models." Transactions on Machine Learning Research, 2024.

Markdown

[Gurnee et al. "Universal Neurons in GPT2 Language Models." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/gurnee2024tmlr-universal/)

BibTeX
@article{gurnee2024tmlr-universal,
  title   = {{Universal Neurons in GPT2 Language Models}},
  author  = {Gurnee, Wes and Horsley, Theo and Guo, Zifan Carl and Kheirkhah, Tara Rezaei and Sun, Qinyi and Hathaway, Will and Nanda, Neel and Bertsimas, Dimitris},
  journal = {Transactions on Machine Learning Research},
  year    = {2024},
  url     = {https://mlanthology.org/tmlr/2024/gurnee2024tmlr-universal/}
}