Analyze Feature Flow to Enhance Interpretation and Steering in Language Models

Abstract

We introduce a new approach to systematically map features discovered by sparse autoencoders across consecutive layers of large language models, extending earlier work that examined inter-layer feature links. Using a data-free cosine similarity technique, we trace how specific features persist, transform, or first appear at each stage. This method yields granular flow graphs of feature evolution, enabling fine-grained interpretability and mechanistic insights into model computations. Crucially, we demonstrate how these cross-layer feature maps facilitate direct steering of model behavior by amplifying or suppressing chosen features, achieving targeted thematic control in text generation. Together, our findings highlight the utility of a causal, cross-layer interpretability framework that not only clarifies how features develop through forward passes but also provides new means for transparent manipulation of large language models.
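The data-free matching described above can be sketched as follows: because each SAE feature has a fixed decoder direction in the residual stream, features in adjacent layers can be paired by cosine similarity between decoder rows alone, without running the model on any data. The shapes, function names, and synthetic matrices below are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def match_features(dec_a: np.ndarray, dec_b: np.ndarray):
    """Match each layer-A feature to its most similar layer-B feature.

    dec_a, dec_b: SAE decoder matrices of shape (num_features, d_model),
    one row per feature direction (hypothetical layout).
    """
    # L2-normalize decoder directions so dot products equal cosine similarity.
    a = dec_a / np.linalg.norm(dec_a, axis=1, keepdims=True)
    b = dec_b / np.linalg.norm(dec_b, axis=1, keepdims=True)
    sims = a @ b.T                      # (n_a, n_b) cosine-similarity matrix
    best = sims.argmax(axis=1)          # nearest layer-B feature per layer-A feature
    return best, sims[np.arange(len(best)), best]

# Toy check: layer B contains layer A's features in reverse order plus
# four genuinely new random directions.
rng = np.random.default_rng(0)
dec_a = rng.normal(size=(8, 16))
dec_b = np.vstack([dec_a[::-1], rng.normal(size=(4, 16))])
idx, score = match_features(dec_a, dec_b)
```

In this toy setup, feature `i` of layer A recovers its copy at index `7 - i` of layer B with cosine similarity 1, while the four appended random directions are matched by nothing, mirroring how the flow graphs distinguish persisting features from newly appearing ones.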

Cite

Text

Laptev et al. "Analyze Feature Flow to Enhance Interpretation and Steering in Language Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Laptev et al. "Analyze Feature Flow to Enhance Interpretation and Steering in Language Models." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/laptev2025icml-analyze/)

BibTeX

@inproceedings{laptev2025icml-analyze,
  title     = {{Analyze Feature Flow to Enhance Interpretation and Steering in Language Models}},
  author    = {Laptev, Daniil and Balagansky, Nikita and Aksenov, Yaroslav and Gavrilov, Daniil},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {32593--32616},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/laptev2025icml-analyze/}
}