Enhancing Concept Localization in CLIP-Based Concept Bottleneck Models
Abstract
This paper addresses explainable AI (XAI) through the lens of Concept Bottleneck Models (CBMs) that do not require explicit concept annotations, relying instead on concepts extracted by CLIP in a zero-shot manner. We show that CLIP, which is central to these techniques, is prone to concept hallucination: it incorrectly predicts the presence or absence of concepts within an image in scenarios common to numerous CBMs, thereby undermining the faithfulness of the explanations. To mitigate this issue, we introduce Concept Hallucination Inhibition via Localized Interpretability (CHILI), a technique that disentangles image embeddings and localizes the pixels corresponding to target concepts. Furthermore, our approach supports the generation of saliency-based explanations that are more interpretable.
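For context, the CLIP-based CBMs the abstract refers to typically obtain concept scores by comparing an image embedding against text embeddings of concept prompts. The sketch below illustrates this standard zero-shot scoring setup only; it is not the paper's CHILI method, and the concept list, checkpoint name, and image path are placeholder assumptions.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Hypothetical concept vocabulary; real CBM pipelines use hundreds of concepts.
concepts = ["striped fur", "long beak", "red wings", "webbed feet"]

# Publicly available CLIP checkpoint (any CLIP variant works analogously).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder image path

# Encode the image once and all concept prompts in a single batch.
inputs = processor(text=concepts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Scaled cosine similarities between the image and each concept prompt;
# CLIP-based CBMs use such scores as the concept bottleneck, which is where
# concept hallucination (spurious presence/absence predictions) can arise.
concept_scores = outputs.logits_per_image.squeeze(0)
for concept, score in zip(concepts, concept_scores.tolist()):
    print(f"{concept}: {score:.2f}")
```

Because these scores are computed from a single global image embedding, they carry no localization signal, which is the gap the paper's pixel-level localization targets.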
Cite
Text
Kazmierczak et al. "Enhancing Concept Localization in CLIP-Based Concept Bottleneck Models." Transactions on Machine Learning Research, 2026.
Markdown
[Kazmierczak et al. "Enhancing Concept Localization in CLIP-Based Concept Bottleneck Models." Transactions on Machine Learning Research, 2026.](https://mlanthology.org/tmlr/2026/kazmierczak2026tmlr-enhancing/)
BibTeX
@article{kazmierczak2026tmlr-enhancing,
  title = {{Enhancing Concept Localization in CLIP-Based Concept Bottleneck Models}},
  author = {Kazmierczak, Rémi and Azzolin, Steve and Frehse, Goran and Berthier, Eloïse and Franchi, Gianni},
  journal = {Transactions on Machine Learning Research},
  year = {2026},
  url = {https://mlanthology.org/tmlr/2026/kazmierczak2026tmlr-enhancing/}
}