B-Cos LM: Efficiently Transforming Pre-Trained Language Models for Improved Explainability
Abstract
Post-hoc explanation methods for black-box models often struggle with faithfulness and human interpretability due to the lack of explainability in current neural architectures. Meanwhile, B-cos networks have been introduced to improve model explainability via an architecture that removes bias terms and promotes input-weight alignment. Although B-cos networks have shown success in building explainable systems, their application has so far been limited to computer vision models and their associated training pipelines. In this work, we introduce B-cos LMs, i.e., B-cos language models empowered for natural language processing (NLP) tasks. Our approach directly transforms pre-trained language models into B-cos LMs by combining B-cos conversion and task fine-tuning, improving efficiency compared to previous methods. Automatic and human evaluation results demonstrate that B-cos LMs produce more faithful and human-interpretable explanations than post-hoc methods, while maintaining task performance comparable to conventional fine-tuning. Our in-depth analysis explores how B-cos LMs differ from conventionally fine-tuned models in their learning processes and explanation patterns. Finally, we present a first exploration of transforming decoder-only models into B-cos LMs for generation tasks. Our code is available at https://github.com/Ewanwong/bcos_lm.
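The core B-cos idea from the vision literature can be stated compactly: a bias-free linear unit rescales its response by the alignment between input and weight, out_j = |cos(x, w_j)|^(B−1) · ⟨ŵ_j, x⟩, so that poorly aligned inputs are suppressed and the weights themselves become explanatory. The following is a minimal, illustrative PyTorch sketch of such a layer; the class name `BcosLinear` and the parameter `b` are our own labels for exposition, and the authors' released code linked above may differ in detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BcosLinear(nn.Module):
    """Illustrative bias-free B-cos linear layer (a sketch, not the authors' release).

    Computes out_j = |cos(x, w_j)|^(B-1) * <w_j/||w_j||, x>: the ordinary
    linear response is down-weighted whenever the input and the weight row
    are poorly aligned, which encourages input-weight alignment during training.
    """

    def __init__(self, in_features: int, out_features: int, b: float = 2.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.weight)
        self.b = b  # alignment pressure; b = 1 recovers a plain bias-free linear layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w_hat = F.normalize(self.weight, dim=1)                 # unit-norm weight rows
        out = F.linear(x, w_hat)                                # <w_hat_j, x>, no bias term
        x_norm = x.norm(dim=-1, keepdim=True).clamp_min(1e-6)   # guard against division by zero
        cos = out / x_norm                                      # cosine similarity cos(x, w_j)
        return out * cos.abs().pow(self.b - 1)                  # B-cos response
```

Under this reading, "B-cos conversion" of a pre-trained model amounts to swapping its linear projections for B-cos variants of this kind before task fine-tuning, as the abstract describes; the exact set of replaced modules is specified in the paper and repository.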
Cite
Text
Wang et al. "B-Cos LM: Efficiently Transforming Pre-Trained Language Models for Improved Explainability." Transactions on Machine Learning Research, 2025.Markdown
[Wang et al. "B-Cos LM: Efficiently Transforming Pre-Trained Language Models for Improved Explainability." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/wang2025tmlr-bcos/)BibTeX
@article{wang2025tmlr-bcos,
  title = {{B-Cos LM: Efficiently Transforming Pre-Trained Language Models for Improved Explainability}},
  author = {Wang, Yifan and Rao, Sukrut and Lee, Ji-Ung and Jobanputra, Mayank and Demberg, Vera},
  journal = {Transactions on Machine Learning Research},
  year = {2025},
  url = {https://mlanthology.org/tmlr/2025/wang2025tmlr-bcos/}
}