Open Problems in Mechanistic Interpretability

Abstract

Mechanistic interpretability aims to understand the computational mechanisms underlying neural networks' capabilities in order to accomplish concrete scientific and engineering goals. Progress in this field thus promises to provide greater assurance over AI system behavior and shed light on exciting scientific questions about the nature of intelligence. Despite recent progress toward these goals, the field faces many open problems that require solutions before its scientific and practical benefits can be fully realized: Our methods require both conceptual and practical improvements to reveal deeper insights; we must figure out how best to apply our methods in pursuit of specific goals; and the field must grapple with socio-technical challenges that influence and are influenced by our work. This forward-facing review discusses the current frontier of mechanistic interpretability and the open problems that the field may benefit from prioritizing.

Cite

Text

Sharkey et al. "Open Problems in Mechanistic Interpretability." Transactions on Machine Learning Research, 2025.

Markdown

[Sharkey et al. "Open Problems in Mechanistic Interpretability." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/sharkey2025tmlr-open/)

BibTeX

@article{sharkey2025tmlr-open,
  title     = {{Open Problems in Mechanistic Interpretability}},
  author    = {Sharkey, Lee and Chughtai, Bilal and Batson, Joshua and Lindsey, Jack and Wu, Jeffrey and Bushnaq, Lucius and Goldowsky-Dill, Nicholas and Heimersheim, Stefan and Ortega, Alejandro and Bloom, Joseph Isaac and Biderman, Stella and Garriga-Alonso, Adrià and Conmy, Arthur and Nanda, Neel and Rumbelow, Jessica Mary and Wattenberg, Martin and Schoots, Nandi and Miller, Joseph and Saunders, William and Michaud, Eric J and Casper, Stephen and Tegmark, Max and Bau, David and Todd, Eric and Geiger, Atticus and Geva, Mor and Hoogland, Jesse and Murfet, Daniel and McGrath, Thomas},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/sharkey2025tmlr-open/}
}