Interpreting Neurons in Deep Vision Networks with Language Models

Abstract

In this paper, we propose Describe-and-Dissect (DnD), a novel method to describe the roles of hidden neurons in vision networks. DnD utilizes recent advancements in multimodal deep learning to produce complex natural language descriptions, without the need for labeled training data or a predefined set of concepts to choose from. Additionally, DnD is training-free: it trains no new models and can easily leverage more capable general-purpose models in the future. We conduct extensive qualitative and quantitative analysis to show that DnD outperforms prior work by providing higher-quality neuron descriptions. Specifically, our method on average provides the highest-quality labels and is more than 2× as likely as the best baseline to be selected as the best explanation for a neuron. Finally, we present a use case providing critical insights into land cover prediction models for sustainability applications.
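To make the general idea concrete, below is a minimal sketch of describing a hidden neuron with a multimodal model: rank a probing set by the neuron's activation and caption the top-activating images with an off-the-shelf image-to-text model. This is not the authors' exact DnD pipeline; the probed network (torchvision ResNet-50), the captioning model (Salesforce/blip-image-captioning-base), the `probing_images/` directory, and the layer/neuron indices are all placeholder assumptions for illustration.

```python
# Sketch: caption a neuron's top-activating images to suggest its role.
# Assumptions (not from the paper): ResNet-50 as the probed network, BLIP as the
# captioning model, and a local probing image folder at "probing_images/".
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from transformers import BlipProcessor, BlipForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"

# Probed vision network and the target unit (placeholder choices).
net = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).to(device).eval()
target_layer, neuron_idx, top_k = net.layer4, 123, 10

# Capture activations of the target layer with a forward hook.
acts = {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(out=o))

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
probe_set = datasets.ImageFolder("probing_images/", transform=preprocess)  # placeholder path
loader = DataLoader(probe_set, batch_size=32, shuffle=False)

# Score each probing image by the neuron's spatially averaged activation.
scores = []
with torch.no_grad():
    for x, _ in loader:
        net(x.to(device))
        scores.append(acts["out"][:, neuron_idx].mean(dim=(1, 2)).cpu())
scores = torch.cat(scores)
top_idx = scores.topk(top_k).indices.tolist()

# Caption the top-activating images with an off-the-shelf multimodal model.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base").to(device)

with torch.no_grad():
    for i in top_idx:
        path, _ = probe_set.samples[i]
        image = datasets.folder.default_loader(path)  # reload the raw PIL image
        inputs = processor(images=image, return_tensors="pt").to(device)
        caption = processor.decode(captioner.generate(**inputs)[0], skip_special_tokens=True)
        print(f"activation={scores[i].item():.3f}  {caption}")
```

The printed captions only illustrate the high-level recipe of pairing highly activating images with a multimodal description model; the full training-free method described in the paper goes beyond this sketch.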

Cite

Text

Bai et al. "Interpreting Neurons in Deep Vision Networks with Language Models." Transactions on Machine Learning Research, 2025.

Markdown

[Bai et al. "Interpreting Neurons in Deep Vision Networks with Language Models." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/bai2025tmlr-interpreting/)

BibTeX

@article{bai2025tmlr-interpreting,
  title     = {{Interpreting Neurons in Deep Vision Networks with Language Models}},
  author    = {Bai, Nicholas and Iyer, Rahul Ajay and Oikarinen, Tuomas and Kulkarni, Akshay R. and Weng, Tsui-Wei},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/bai2025tmlr-interpreting/}
}