DeepDecipher: Accessing and Investigating Neuron Activation in Large Language Models

Abstract

As large language models (LLMs) become more capable, there is an urgent need for interpretable and transparent tools. Current methods are difficult to implement, and accessible tools for analyzing model internals are lacking. To bridge this gap, we present DeepDecipher, an API and interface for probing neurons in transformer models' MLP layers. DeepDecipher makes the outputs of advanced interpretability techniques readily available for LLMs, and its easy-to-use interface makes inspecting these complex models more intuitive. This paper outlines DeepDecipher's design and capabilities. We demonstrate how to analyze neurons, compare models, and gain insights into model behavior, and we contrast DeepDecipher's functionality with that of similar tools such as Neuroscope and OpenAI's Neuron Explainer. DeepDecipher enables efficient, scalable analysis of LLMs. By granting access to state-of-the-art interpretability methods, DeepDecipher makes LLMs more transparent, trustworthy, and safe. Researchers, engineers, and developers can quickly diagnose issues, audit systems, and advance the field.
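
To make the kind of programmatic access described in the abstract concrete, below is a minimal sketch of fetching a neuron record over HTTP. The base URL, the {model}/{service}/{layer}/{neuron} path scheme, and the example model and neuron indices are assumptions for illustration only, not the documented DeepDecipher API; consult the paper or project site for the actual endpoints.

import json
import urllib.request

# Assumed base URL and path scheme; the real DeepDecipher API may differ.
BASE_URL = "https://deepdecipher.org/api"

def fetch_neuron(model: str, layer: int, neuron: int) -> dict:
    """Fetch the interpretability record for one MLP neuron (illustrative only)."""
    url = f"{BASE_URL}/{model}/all/{layer}/{neuron}"  # hypothetical endpoint
    with urllib.request.urlopen(url) as response:
        return json.load(response)

record = fetch_neuron("gpt2-small", layer=5, neuron=131)
print(json.dumps(record, indent=2)[:500])  # preview the returned fields

Under these assumptions, a call like fetch_neuron(...) would return whatever JSON the service exposes for that neuron, such as the outputs of the interpretability techniques the abstract mentions.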

Cite

Text

Garde et al. "DeepDecipher: Accessing and Investigating Neuron Activation in Large Language Models." NeurIPS 2023 Workshops: XAIA, 2023.

Markdown

[Garde et al. "DeepDecipher: Accessing and Investigating Neuron Activation in Large Language Models." NeurIPS 2023 Workshops: XAIA, 2023.](https://mlanthology.org/neuripsw/2023/garde2023neuripsw-deepdecipher/)

BibTeX

@inproceedings{garde2023neuripsw-deepdecipher,
  title     = {{DeepDecipher: Accessing and Investigating Neuron Activation in Large Language Models}},
  author    = {Garde, Albert and Kran, Esben and Barez, Fazl},
  booktitle = {NeurIPS 2023 Workshops: XAIA},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/garde2023neuripsw-deepdecipher/}
}