Sparse Autoencoders for Hypothesis Generation
Abstract
We describe HypotheSAEs, a general method to hypothesize interpretable relationships between text data (e.g., headlines) and a target variable (e.g., clicks). HypotheSAEs has three steps: (1) train a sparse autoencoder on text embeddings to produce interpretable features describing the data distribution, (2) select features that predict the target variable, and (3) generate a natural language interpretation of each feature (e.g., "mentions being surprised or shocked") using an LLM. Each interpretation serves as a hypothesis about what predicts the target variable. Compared to baselines, our method better identifies reference hypotheses on synthetic datasets (at least +0.06 in F1) and produces more predictive hypotheses on real datasets (twice as many significant findings), despite requiring 1-2 orders of magnitude less compute than recent LLM-based methods. HypotheSAEs also produces novel discoveries on two well-studied tasks: explaining partisan differences in Congressional speeches and identifying drivers of engagement with online headlines.
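The three steps above can be sketched in code. This is a minimal illustration, not the authors' implementation: the encoder weights are random rather than trained, feature selection uses simple correlation rather than the paper's method, and the LLM interpretation step is indicated only as a comment. All function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def topk_sae_features(X, n_features=16, k=4):
    """Step 1 (sketch): encode embeddings X into sparse features.
    A trained SAE would learn W_enc by minimizing reconstruction error;
    here it is random for illustration."""
    W_enc = rng.normal(size=(X.shape[1], n_features))
    acts = np.maximum(X @ W_enc, 0.0)  # ReLU activations
    # Top-k sparsity: keep only the k largest activations per example
    thresh = np.partition(acts, -k, axis=1)[:, -k][:, None]
    return np.where(acts >= thresh, acts, 0.0)

def select_features(Z, y, n_select=3):
    """Step 2 (sketch): rank features by |correlation| with the target."""
    corrs = np.array([
        abs(np.corrcoef(Z[:, j], y)[0, 1]) if Z[:, j].std() > 0 else 0.0
        for j in range(Z.shape[1])
    ])
    return np.argsort(-corrs)[:n_select]

# Toy data: 100 "text embeddings" and a binary target (e.g., clicked or not)
X = rng.normal(size=(100, 32))
y = (X[:, 0] > 0).astype(float)

Z = topk_sae_features(X)
selected = select_features(Z, y)
# Step 3 would prompt an LLM with each selected feature's top-activating
# texts to produce a natural-language interpretation, i.e., a hypothesis.
print(selected)
```

In the actual pipeline, step 2's selected features are interpreted from their highest-activating examples, and each interpretation is then validated as a predictor of the target.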
Cite
Text
Movva et al. "Sparse Autoencoders for Hypothesis Generation." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Movva et al. "Sparse Autoencoders for Hypothesis Generation." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/movva2025icml-sparse/)
BibTeX
@inproceedings{movva2025icml-sparse,
title = {{Sparse Autoencoders for Hypothesis Generation}},
author = {Movva, Rajiv and Peng, Kenny and Garg, Nikhil and Kleinberg, Jon and Pierson, Emma},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {44997--45023},
volume = {267},
url = {https://mlanthology.org/icml/2025/movva2025icml-sparse/}
}