Neuron-Based Explanations of Neural Networks Sacrifice Completeness and Interpretability

Abstract

High-quality explanations of neural networks (NNs) should exhibit two key properties. Completeness ensures that they accurately reflect a network's function, and interpretability makes them understandable to humans. Many existing methods provide explanations of individual neurons within a network. In this work, we provide evidence that for AlexNet pretrained on ImageNet, neuron-based explanation methods sacrifice both completeness and interpretability compared to activation principal components (PCs). Neurons are a poor basis for AlexNet embeddings because they do not account for the distributed nature of these representations. By examining two quantitative measures of completeness and conducting a user study to measure interpretability, we show that the most important principal components provide more complete and interpretable explanations than the most important neurons. Much of the activation variance can be explained by examining relatively few high-variance PCs, rather than studying every neuron. These principal components also strongly affect network function and are significantly more interpretable than neurons. Our findings suggest that explanation methods for networks like AlexNet should avoid using neurons as a basis for embeddings and instead choose a basis, such as principal components, that accounts for the high-dimensional and distributed nature of a network's internal representations. An interactive demo and code are available at https://ndey96.github.io/neuron-explanations-sacrifice.
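
To make the abstract's comparison concrete, below is a minimal sketch of how one might compute activation principal components for a single AlexNet layer. This is not the authors' released code (see the linked repository for that); it assumes PyTorch, torchvision, and scikit-learn, and the hooked layer, the random stand-in inputs, and the choice of 50 components are illustrative assumptions only.

import torch
import torchvision.models as models
from sklearn.decomposition import PCA

# ImageNet-pretrained AlexNet, as studied in the paper.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

activations = []

def hook(module, inputs, output):
    # Flatten spatial dimensions so each row is one activation vector.
    activations.append(output.flatten(start_dim=1).detach())

# Hook the final conv feature layer (an illustrative choice; the paper
# examines AlexNet embeddings more broadly).
handle = model.features[-1].register_forward_hook(hook)

with torch.no_grad():
    # Stand-in batch; in practice, preprocessed ImageNet images.
    model(torch.randn(64, 3, 224, 224))
handle.remove()

X = torch.cat(activations).numpy()

# Project activations onto their top high-variance principal components;
# per the abstract, relatively few PCs can explain much of the variance.
pca = PCA(n_components=50)
Z = pca.fit_transform(X)
print(f"Variance explained by 50 PCs: {pca.explained_variance_ratio_.sum():.3f}")

Each column of Z is one principal-component coordinate of the layer's activations, playing the role that an individual neuron's activation plays in neuron-based explanation methods.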

Cite

Text

Dey et al. "Neuron-Based Explanations of Neural Networks Sacrifice Completeness and Interpretability." Transactions on Machine Learning Research, 2025.

Markdown

[Dey et al. "Neuron-Based Explanations of Neural Networks Sacrifice Completeness and Interpretability." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/dey2025tmlr-neuronbased/)

BibTeX

@article{dey2025tmlr-neuronbased,
  title     = {{Neuron-Based Explanations of Neural Networks Sacrifice Completeness and Interpretability}},
  author    = {Dey, Nolan Simran and Taylor, Eric and Wong, Alexander and Tripp, Bryan P. and Taylor, Graham W.},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/dey2025tmlr-neuronbased/}
}