SAEBench: A Comprehensive Benchmark for Sparse Autoencoders in Language Model Interpretability
Abstract
Sparse autoencoders (SAEs) are a popular technique for interpreting language model activations, and there is extensive recent work on improving SAE effectiveness. However, most prior work evaluates progress using unsupervised proxy metrics with unclear practical relevance. We introduce SAEBench, a comprehensive evaluation suite that measures SAE performance across eight diverse metrics, spanning interpretability, feature disentanglement, and practical applications like unlearning. To enable systematic comparison, we open-source a suite of over 200 SAEs across seven recently proposed SAE architectures and training algorithms. Our evaluation reveals that gains on proxy metrics do not reliably translate to better practical performance. For instance, while Matryoshka SAEs slightly underperform on existing proxy metrics, they substantially outperform other architectures on feature disentanglement metrics; moreover, this advantage grows with SAE scale. By providing a standardized framework for measuring progress in SAE development, SAEBench enables researchers to study scaling trends and make nuanced comparisons between different SAE architectures and training methodologies. Our interactive interface enables researchers to flexibly visualize relationships between metrics across hundreds of open-source SAEs at www.neuronpedia.org/sae-bench.
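To make the "unsupervised proxy metrics" framing concrete, the sketch below shows a minimal ReLU sparse autoencoder over language model activations and the two proxy metrics most commonly reported (L0 sparsity and reconstruction error). The class name, dimensions, and initialization are illustrative assumptions, not SAEBench's implementation or any particular architecture from the paper.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal ReLU SAE: encodes activations into an overcomplete sparse basis."""
    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_model, d_sae) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_model) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x: torch.Tensor):
        # Encode: project activations into sparse feature space.
        f = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        # Decode: reconstruct the original activations from active features.
        x_hat = f @ self.W_dec + self.b_dec
        return x_hat, f

# Hypothetical usage with random stand-ins for LM activations.
sae = SparseAutoencoder(d_model=512, d_sae=4096)
acts = torch.randn(128, 512)
recon, feats = sae(acts)
l0 = (feats > 0).float().sum(dim=-1).mean()   # avg. number of active features per token
mse = (recon - acts).pow(2).mean()            # reconstruction error
print(f"L0 = {l0:.1f}, MSE = {mse:.4f}")
```

Metrics like these are cheap to compute without labels, which is why they dominate prior work; SAEBench's point is that they should be complemented by downstream measures such as interpretability, feature disentanglement, and unlearning performance.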
Cite
Text
Karvonen et al. "SAEBench: A Comprehensive Benchmark for Sparse Autoencoders in Language Model Interpretability." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Karvonen et al. "SAEBench: A Comprehensive Benchmark for Sparse Autoencoders in Language Model Interpretability." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/karvonen2025icml-saebench/)
BibTeX
@inproceedings{karvonen2025icml-saebench,
title = {{SAEBench: A Comprehensive Benchmark for Sparse Autoencoders in Language Model Interpretability}},
author = {Karvonen, Adam and Rager, Can and Lin, Johnny and Tigges, Curt and Bloom, Joseph Isaac and Chanin, David and Lau, Yeu-Tong and Farrell, Eoin and Mcdougall, Callum Stuart and Ayonrinde, Kola and Till, Demian and Wearden, Matthew and Conmy, Arthur and Marks, Samuel and Nanda, Neel},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {29223--29264},
volume = {267},
url = {https://mlanthology.org/icml/2025/karvonen2025icml-saebench/}
}