Crafting Large Language Models for Enhanced Interpretability
Abstract
We introduce the Concept Bottleneck Large Language Model (CB-LLM), a pioneering approach to creating inherently interpretable Large Language Models (LLMs). Unlike traditional black-box LLMs, which rely on post-hoc interpretation methods that offer limited insight into neuron function, CB-LLM sets a new standard with its built-in interpretability, scalability, and ability to provide clear, accurate explanations. This innovation not only advances transparency in language models but also enhances their effectiveness. Our Automatic Concept Correction (ACC) strategy successfully narrows the performance gap with conventional black-box LLMs, positioning CB-LLM as a model that combines the high accuracy of traditional LLMs with the added benefit of clear interpretability --- a feature markedly absent in existing LLMs.
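The concept-bottleneck idea the abstract refers to can be sketched in a few lines: the model's prediction is forced to pass through a small layer of human-nameable concept scores, so each prediction can be explained by the concepts that fired. The dimensions, concept names, and random weights below are purely illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 8-dim text embedding, 4 named concepts, 2 output classes.
d_embed, n_concepts, n_classes = 8, 4, 2
concept_names = ["positive tone", "negative tone", "question", "statement"]  # hypothetical

# Concept bottleneck head: embedding -> concept scores -> class logits.
# In a trained model these weights would be learned; here they are random.
W_concept = rng.standard_normal((d_embed, n_concepts))
W_cls = rng.standard_normal((n_concepts, n_classes))

def predict(embedding):
    concept_scores = embedding @ W_concept  # each score aligns with one named concept
    logits = concept_scores @ W_cls         # final prediction uses ONLY the concept scores
    return concept_scores, logits

embedding = rng.standard_normal(d_embed)
scores, logits = predict(embedding)

# Because the classifier sees only concept scores, the prediction can be
# explained by pointing at the highest-scoring concepts.
top_concept = concept_names[int(np.argmax(scores))]
```

Because every bit of information reaching the classifier flows through the named concept scores, inspecting those scores gives a faithful (not post-hoc) explanation of the prediction.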
Cite
Text
Sun et al. "Crafting Large Language Models for Enhanced Interpretability." ICML 2024 Workshops: MI, 2024.
Markdown
[Sun et al. "Crafting Large Language Models for Enhanced Interpretability." ICML 2024 Workshops: MI, 2024.](https://mlanthology.org/icmlw/2024/sun2024icmlw-crafting/)
BibTeX
@inproceedings{sun2024icmlw-crafting,
title = {{Crafting Large Language Models for Enhanced Interpretability}},
author = {Sun, Chung-En and Oikarinen, Tuomas and Weng, Tsui-Wei},
booktitle = {ICML 2024 Workshops: MI},
year = {2024},
url = {https://mlanthology.org/icmlw/2024/sun2024icmlw-crafting/}
}