Attention Mechanisms Don’t Learn Additive Models: Rethinking Feature Importance for Transformers
Abstract
We address the critical challenge of applying feature attribution methods to the transformer architecture, which dominates current applications in natural language processing and beyond. Traditional attribution methods in explainable AI (XAI) explicitly or implicitly rely on linear or additive surrogate models to quantify the impact of input features on a model's output. In this work, we formally prove an alarming incompatibility: transformers are structurally incapable of representing the linear or additive surrogate models used for feature attribution, undermining the grounding of these conventional explanation methodologies. To address this discrepancy, we introduce the Softmax-Linked Additive Log Odds Model (SLALOM), a novel surrogate model specifically designed to align with the transformer framework. SLALOM delivers a range of insightful explanations on both synthetic and real-world datasets. We highlight SLALOM's unique efficiency-quality curve by showing that it can produce explanations with substantially higher fidelity than competing surrogate models, or provide explanations of comparable quality at a fraction of their computational cost. We release code for SLALOM as an open-source project online at https://github.com/tleemann/slalom_explanations.
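To make the contrast in the abstract concrete, the following minimal Python sketch compares a standard linear/additive surrogate with a softmax-linked surrogate in the spirit of SLALOM, in which each token receives a value score and an importance score coupled through a softmax. The functional form, variable names, and scores used here are illustrative assumptions for intuition only, not the paper's exact definition; see the repository linked above for the authors' implementation.

```python
import numpy as np

def additive_surrogate(phi, x, bias=0.0):
    """Conventional additive surrogate: output = bias + sum_t phi_t * x_t."""
    return bias + float(np.dot(phi, x))

def softmax_linked_surrogate(values, importances):
    """Softmax-weighted combination of per-token value scores (illustrative).

    values:      per-token value scores v_t (hypothetical naming)
    importances: per-token importance scores s_t (hypothetical naming)
    """
    weights = np.exp(importances - importances.max())  # numerically stable softmax
    weights /= weights.sum()
    return float(np.dot(weights, values))

# Toy example with three tokens: the softmax coupling means a token's effect
# on the output depends on the importance of the other tokens present,
# which an additive surrogate cannot express.
values = np.array([2.0, -1.0, 0.5])
importances = np.array([1.5, 0.0, -0.5])
print(softmax_linked_surrogate(values, importances))
```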
Cite
Text
Leemann et al. "Attention Mechanisms Don’t Learn Additive Models: Rethinking Feature Importance for Transformers." Transactions on Machine Learning Research, 2025.

Markdown
[Leemann et al. "Attention Mechanisms Don’t Learn Additive Models: Rethinking Feature Importance for Transformers." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/leemann2025tmlr-attention/)

BibTeX
@article{leemann2025tmlr-attention,
  title = {{Attention Mechanisms Don’t Learn Additive Models: Rethinking Feature Importance for Transformers}},
  author = {Leemann, Tobias and Fastowski, Alina and Pfeiffer, Felix and Kasneci, Gjergji},
  journal = {Transactions on Machine Learning Research},
  year = {2025},
  url = {https://mlanthology.org/tmlr/2025/leemann2025tmlr-attention/}
}