Using Degeneracy in the Loss Landscape for Mechanistic Interpretability
Abstract
Mechanistic Interpretability aims to reverse engineer the algorithms implemented by neural networks by studying their weights and activations. An obstacle to reverse engineering neural networks is that many of the parameters inside a network are not involved in the computation the network implements. These degenerate parameters may obfuscate internal structure. Singular Learning Theory teaches us that neural network parameterizations are biased towards being more degenerate, and that parameterizations with more degeneracy are likely to generalize further. We identify three ways that network parameters can be degenerate: linear dependence between activations in a layer; linear dependence between gradients passed back to a layer; and ReLUs which fire on the same subset of datapoints. We propose that if we can represent a neural network in a way that is invariant to reparameterizations that exploit these degeneracies, then this representation is likely to be more interpretable. We introduce the Interaction Basis, a tractable technique to obtain a representation that is invariant to degeneracies arising from linear dependence of activations or Jacobians.
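To make the first two degeneracies concrete, here is a minimal illustrative sketch in NumPy, not the authors' implementation: it drops activation directions with negligible variance (linearly dependent activations) and directions the backpropagated gradients never vary along (linearly dependent gradients), yielding a reduced rotation in the spirit of the Interaction Basis. The function and variable names are hypothetical.

```python
import numpy as np

def interaction_basis_sketch(acts, grads, tol=1e-6):
    """Illustrative sketch (assumed interface, not the paper's code).

    acts:  (n_samples, d) activations of one layer over a dataset
    grads: (n_samples, d) gradients of the loss w.r.t. those activations
    tol:   relative threshold below which a singular value counts as zero

    Returns a (k, d) matrix whose rows span the non-degenerate directions.
    """
    # (a) Keep only activation directions with non-negligible variance;
    #     directions with zero variance reflect linearly dependent activations.
    _, S_a, Vt_a = np.linalg.svd(acts - acts.mean(axis=0), full_matrices=False)
    keep_a = Vt_a[S_a > tol * S_a.max()]          # (k_a, d)

    # Project activations and gradients into the surviving subspace.
    grads_p = grads @ keep_a.T                    # (n_samples, k_a)

    # (b) Within that subspace, keep only directions the gradients actually
    #     vary along; the rest do not affect the loss and are degenerate.
    _, S_g, Vt_g = np.linalg.svd(grads_p, full_matrices=False)
    keep_g = Vt_g[S_g > tol * S_g.max()]          # (k, k_a)

    # Composite rotation from the original activation space into the
    # reduced basis.
    return keep_g @ keep_a                        # (k, d)

# Usage: rotate a layer's activations into the reduced basis.
# acts_reduced = acts @ interaction_basis_sketch(acts, grads).T
```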
Cite
Text
Bushnaq et al. "Using Degeneracy in the Loss Landscape for Mechanistic Interpretability." ICML 2024 Workshops: MI, 2024.
Markdown
[Bushnaq et al. "Using Degeneracy in the Loss Landscape for Mechanistic Interpretability." ICML 2024 Workshops: MI, 2024.](https://mlanthology.org/icmlw/2024/bushnaq2024icmlw-using/)
BibTeX
@inproceedings{bushnaq2024icmlw-using,
  title = {{Using Degeneracy in the Loss Landscape for Mechanistic Interpretability}},
  author = {Bushnaq, Lucius and Mendel, Jake and Heimersheim, Stefan and Braun, Dan and Goldowsky-Dill, Nicholas and Hänni, Kaarel and Wu, Cindy and Hobbhahn, Marius},
  booktitle = {ICML 2024 Workshops: MI},
  year = {2024},
  url = {https://mlanthology.org/icmlw/2024/bushnaq2024icmlw-using/}
}