Optimal Ablation for Interpretability

Abstract

Interpretability studies often involve tracing the flow of information through machine learning models to identify specific model components that perform computations relevant to tasks of interest. Prior work quantifies the importance of a model component for a particular task by measuring the impact of ablating that component, i.e., simulating model inference with the component disabled. We propose a new method, optimal ablation (OA), and show that measuring component importance with OA has theoretical and empirical advantages over other ablation methods. We also show that OA-based component importance benefits several downstream interpretability tasks, including circuit discovery, localization of factual recall, and latent prediction.
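
The notion of ablation in the abstract can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch example, not the paper's implementation: the helper names (ablated_loss, optimal_ablation_value) and the use of a forward hook are illustrative assumptions. Standard ablation replaces a component's output with a fixed value such as zeros or its mean; OA, as described above, instead chooses the constant replacement that minimizes the resulting task loss, and importance is read off as the loss gap relative to clean inference.

import torch

def ablated_loss(model, component, inputs, targets, replacement, loss_fn):
    # Run inference with `component`'s output replaced by a constant vector.
    def hook(module, inp, out):
        return replacement.expand_as(out)
    handle = component.register_forward_hook(hook)
    try:
        loss = loss_fn(model(inputs), targets)
    finally:
        handle.remove()
    return loss

def optimal_ablation_value(model, component, inputs, targets, dim, loss_fn,
                           steps=200, lr=1e-2):
    # Optimize the constant replacement to minimize the ablated loss (OA idea).
    value = torch.zeros(dim, requires_grad=True)
    opt = torch.optim.Adam([value], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ablated_loss(model, component, inputs, targets, value, loss_fn)
        loss.backward()
        opt.step()
    return value.detach()

# OA-based importance of the component: loss gap between ablated and clean inference.
# clean = loss_fn(model(inputs), targets)
# importance = ablated_loss(model, component, inputs, targets, v_star, loss_fn) - clean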

Cite

Text

Li and Janson. "Optimal Ablation for Interpretability." Neural Information Processing Systems, 2024. doi:10.52202/079017-3468

Markdown

[Li and Janson. "Optimal Ablation for Interpretability." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/li2024neurips-optimal/) doi:10.52202/079017-3468

BibTeX

@inproceedings{li2024neurips-optimal,
  title     = {{Optimal Ablation for Interpretability}},
  author    = {Li, Maximilian and Janson, Lucas},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-3468},
  url       = {https://mlanthology.org/neurips/2024/li2024neurips-optimal/}
}