Concept Bottleneck Large Language Models

Abstract

We introduce Concept Bottleneck Large Language Models (CB-LLMs), a novel framework for building inherently interpretable Large Language Models (LLMs). In contrast to traditional black-box LLMs that rely on limited post-hoc interpretations, CB-LLMs integrate intrinsic interpretability directly into the LLMs -- allowing accurate explanations with scalability and transparency. We build CB-LLMs for two essential NLP tasks: text classification and text generation. In text classification, CB-LLMs are competitive with, and at times outperform, traditional black-box models while providing explicit and interpretable reasoning. For the more challenging task of text generation, interpretable neurons in CB-LLMs enable precise concept detection, controlled generation, and safer outputs. The embedded interpretability empowers users to transparently identify harmful content, steer model behavior, and unlearn undesired concepts -- significantly enhancing the safety, reliability, and trustworthiness of LLMs, capabilities notably absent in existing language models.
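To make the bottleneck idea concrete, below is a minimal PyTorch sketch of the general concept-bottleneck recipe for text classification: text embeddings are projected onto a small set of human-readable concepts, and a linear head maps concept activations to labels, so every prediction decomposes into explicit per-concept contributions. The class name, dimensions, concept count, and ReLU activation here are illustrative assumptions, not the paper's implementation.

# A minimal sketch of the concept-bottleneck idea for text classification,
# assuming embeddings from some frozen sentence encoder and a hypothetical
# list of named concepts. Illustrates the general CBM recipe
# (input -> concept scores -> label); not the authors' code.

import torch
import torch.nn as nn

class ConceptBottleneckClassifier(nn.Module):
    def __init__(self, embed_dim: int, num_concepts: int, num_classes: int):
        super().__init__()
        # Concept bottleneck layer: one score per human-interpretable
        # concept (e.g., "mentions violence").
        self.cbl = nn.Linear(embed_dim, num_concepts)
        # The final predictor is linear, so each logit is a weighted sum
        # of concept activations -- the source of the explicit reasoning.
        self.head = nn.Linear(num_concepts, num_classes)

    def forward(self, embeddings: torch.Tensor):
        concept_scores = torch.relu(self.cbl(embeddings))  # "interpretable neurons"
        logits = self.head(concept_scores)
        return logits, concept_scores

# Usage sketch: explain a prediction by its top contributing concepts.
model = ConceptBottleneckClassifier(embed_dim=768, num_concepts=4, num_classes=2)
emb = torch.randn(1, 768)  # stand-in for one encoded sentence
logits, scores = model(emb)
pred = logits.argmax(dim=-1).item()
contributions = model.head.weight[pred] * scores[0]  # per-concept attribution
top = contributions.topk(k=2).indices.tolist()
print(f"predicted class {pred}, driven most by concept neurons {top}")

Because the head is linear, inspecting these per-concept contributions yields the explicit reasoning the abstract refers to; the same bottleneck neurons are what would be probed or suppressed for concept detection and controlled generation.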

Cite

Text

Sun et al. "Concept Bottleneck Large Language Models." International Conference on Learning Representations, 2025.

Markdown

[Sun et al. "Concept Bottleneck Large Language Models." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/sun2025iclr-concept/)

BibTeX

@inproceedings{sun2025iclr-concept,
  title     = {{Concept Bottleneck Large Language Models}},
  author    = {Sun, Chung-En and Oikarinen, Tuomas and Ustun, Berk and Weng, Tsui-Wei},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/sun2025iclr-concept/}
}