Interpreting Vision Transformers via Residual Replacement Model

Abstract

How do vision transformers (ViTs) represent and process the world? This paper addresses this long-standing question through the first systematic analysis of 6.6K features across all layers, extracted via sparse autoencoders, and by introducing the residual replacement model, which replaces ViT computations with interpretable features in the residual stream. Our analysis reveals not only a feature evolution from low-level patterns to high-level semantics, but also how ViTs encode curves and spatial positions through specialized feature types. The residual replacement model scalably produces a faithful yet parsimonious circuit for human-scale interpretability by significantly simplifying the original computations. As a result, this framework enables intuitive understanding of ViT mechanisms. Finally, we demonstrate the utility of our framework in debiasing spurious correlations.
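To make the abstract's two ingredients concrete, the sketch below shows how a sparse autoencoder (SAE) can be trained on ViT residual-stream activations to obtain a dictionary of interpretable feature directions, which is the kind of feature extraction the paper describes. This is a minimal, hedged illustration: the ReLU + L1 formulation, dimensions, hyperparameters, and variable names are assumptions for exposition, not the authors' actual implementation, and the paper's 6.6K features are counted across all layers rather than per layer.

```python
# Minimal sketch: a sparse autoencoder over ViT residual-stream activations.
# All names, sizes, and the ReLU + L1 objective are illustrative assumptions.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # residual dim -> feature dictionary
        self.decoder = nn.Linear(d_features, d_model)  # features -> residual reconstruction

    def forward(self, resid: torch.Tensor):
        feats = torch.relu(self.encoder(resid))  # sparse, non-negative feature activations
        recon = self.decoder(feats)              # reconstruction of the residual stream
        return feats, recon

# Stand-in for residual-stream activations collected at one ViT layer
# (patch tokens with hidden size 768, as in ViT-B; purely synthetic here).
resid = torch.randn(4096, 768)

# Dictionary size is illustrative; the paper reports 6.6K features across all layers.
sae = SparseAutoencoder(d_model=768, d_features=6600)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)

for step in range(100):
    feats, recon = sae(resid)
    # Reconstruction loss keeps features faithful to the residual stream;
    # the L1 penalty encourages each token to activate only a few features.
    loss = (recon - resid).pow(2).mean() + 1e-3 * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Each decoder column can then be read as a candidate feature direction in the residual stream; substituting the SAE reconstruction back into the model is the spirit (though not the specifics) of the residual replacement model described above.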

Cite

Text

Kim et al. "Interpreting Vision Transformers via Residual Replacement Model." Advances in Neural Information Processing Systems, 2025.

BibTeX

@inproceedings{kim2025neurips-interpreting,
  title     = {{Interpreting Vision Transformers via Residual Replacement Model}},
  author    = {Kim, Jinyeong and Kim, Junhyeok and Shim, Yumin and Kim, Joohyeok and Jung, Sunyoung and Hwang, Seong Jae},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/kim2025neurips-interpreting/}
}