Overcoming Sparsity Artifacts in Crosscoders to Interpret Chat-Tuning
Abstract
Model diffing is the study of how fine-tuning changes a model's representations and internal algorithms. Many behaviors of interest are introduced during fine-tuning, and model diffing offers a promising lens for interpreting such behaviors. Crosscoders are a recent model diffing method that learns a shared dictionary of interpretable concepts, represented as latent directions in both the base and fine-tuned models, allowing us to track how concepts shift or emerge during fine-tuning. Notably, prior work has observed concepts with no direction in the base model, and it was hypothesized that these model-specific latents were concepts introduced during fine-tuning. However, we identify two issues stemming from the crosscoder's L1 training loss that can misattribute concepts as unique to the fine-tuned model when they in fact exist in both models. We develop Latent Scaling to flag these issues by more accurately measuring each latent's presence across models. In experiments comparing Gemma 2 2B base and chat models, we observe that the standard crosscoder suffers heavily from these issues. Building on these insights, we train a crosscoder with a BatchTopK loss and show that it substantially mitigates these issues, finding more genuinely chat-specific and highly interpretable concepts. We recommend practitioners adopt similar techniques. Using the BatchTopK crosscoder, we identify a set of chat-specific latents that are both interpretable and causally effective, representing concepts such as false information and personal questions, along with multiple refusal-related latents that show nuanced preferences for different refusal triggers. Overall, our work advances best practices for crosscoder-based model diffing and demonstrates that it can provide concrete insights into how chat-tuning modifies model behavior.
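For intuition, one natural way to measure a latent's presence in each model, in the spirit of the paper's Latent Scaling, is a per-latent scalar least-squares fit: for each latent, find the scale at which its decoder direction (weighted by the latent's activations) best reconstructs that model's activations. Below is a minimal sketch of that idea, not the authors' exact procedure; all variable names (acts, f, decoder) are illustrative assumptions, not taken from their code.

```python
import torch

# Assumed precomputed tensors (illustrative shapes):
#   acts:    (n_tokens, d_model)   residual-stream activations of one model
#   f:       (n_tokens, n_latents) crosscoder latent activations
#   decoder: (n_latents, d_model)  that model's decoder directions
def latent_scaling(acts: torch.Tensor, f: torch.Tensor, decoder: torch.Tensor) -> torch.Tensor:
    """For each latent j, solve beta_j = argmin_b sum_i ||acts_i - b * f_ij * d_j||^2.

    Closed form: beta_j = sum_i f_ij (acts_i . d_j) / (||d_j||^2 * sum_i f_ij^2).
    """
    proj = acts @ decoder.T                                   # (n_tokens, n_latents): acts_i . d_j
    numer = (f * proj).sum(dim=0)                             # sum_i f_ij (acts_i . d_j)
    denom = decoder.pow(2).sum(dim=1) * f.pow(2).sum(dim=0)   # ||d_j||^2 * sum_i f_ij^2
    return numer / denom.clamp_min(1e-8)                      # avoid division by zero for dead latents
```

Running this fit separately on the base and chat activations (each with its own decoder directions) and comparing the two fitted scales per latent then flags latents that the L1 crosscoder labels chat-specific but that are in fact present in both models: a genuinely chat-specific latent should fit with a near-zero scale in the base model.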
Cite
Text
Minder et al. "Overcoming Sparsity Artifacts in Crosscoders to Interpret Chat-Tuning." Advances in Neural Information Processing Systems, 2025.
Markdown
[Minder et al. "Overcoming Sparsity Artifacts in Crosscoders to Interpret Chat-Tuning." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/minder2025neurips-overcoming/)
BibTeX
@inproceedings{minder2025neurips-overcoming,
  title = {{Overcoming Sparsity Artifacts in Crosscoders to Interpret Chat-Tuning}},
  author = {Minder, Julian and Dumas, Clément and Juang, Caden and Chughtai, Bilal and Nanda, Neel},
  booktitle = {Advances in Neural Information Processing Systems},
  year = {2025},
  url = {https://mlanthology.org/neurips/2025/minder2025neurips-overcoming/}
}