Estimation of Concept Explanations Should Be Uncertainty Aware

Abstract

Model explanations are valuable for interpreting and debugging prediction models. We study a specific kind of global explanation called Concept Explanations, where the goal is to interpret a model using human-understandable concepts. Recent advances in multi-modal learning have rekindled interest in concept explanations and led to several label-efficient proposals for estimation. However, existing estimation methods are sensitive to the choice of concepts or the dataset used to compute explanations. We observe that this instability arises because the estimators do not model noise. We propose an uncertainty-aware estimation method, which readily improves the reliability of concept explanations. We demonstrate with theoretical analysis and empirical evaluation that explanations computed by our method are stable to the choice of concepts and to data shifts while also being label-efficient and faithful.
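
The abstract does not spell out the estimator, so the snippet below is only a rough, hypothetical sketch of what "uncertainty aware" estimation of concept importances can look like: instead of a point estimate of each concept's weight, fit a simple Bayesian linear model over concept scores and report posterior means with variances, so noisy or poorly identified concepts are flagged rather than over-trusted. All names and numbers here are illustrative assumptions, not the paper's method or code.

```python
# Hypothetical sketch (not the authors' released code): uncertainty-aware concept
# importance via Bayesian linear regression over concept scores.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-example concept scores (e.g., similarities to concept
# prompts) and the prediction model's logit for the class being explained.
n_examples, n_concepts = 500, 10
concept_scores = rng.normal(size=(n_examples, n_concepts))
true_weights = np.zeros(n_concepts)
true_weights[:3] = [2.0, -1.5, 1.0]              # only a few concepts matter
model_logits = concept_scores @ true_weights + rng.normal(scale=0.5, size=n_examples)

# Bayesian linear regression: prior w ~ N(0, alpha^-1 I), noise precision beta.
alpha, beta = 1.0, 4.0                           # assumed hyperparameters
X, y = concept_scores, model_logits
posterior_cov = np.linalg.inv(alpha * np.eye(n_concepts) + beta * X.T @ X)
posterior_mean = beta * posterior_cov @ X.T @ y

# Report each concept's weight with its posterior uncertainty.
for j in range(n_concepts):
    mean, std = posterior_mean[j], np.sqrt(posterior_cov[j, j])
    flag = "important" if abs(mean) > 2 * std else "uncertain/negligible"
    print(f"concept {j}: weight = {mean:+.2f} +/- {std:.2f}  ({flag})")
```

In this toy setup, concepts whose weights are small relative to their posterior standard deviation are treated as unreliable, which is one way an uncertainty-aware estimator can stay stable when the concept set or dataset changes.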

Cite

Text

Piratla et al. "Estimation of Concept Explanations Should Be Uncertainty Aware." NeurIPS 2023 Workshops: XAIA, 2023.

Markdown

[Piratla et al. "Estimation of Concept Explanations Should Be Uncertainty Aware." NeurIPS 2023 Workshops: XAIA, 2023.](https://mlanthology.org/neuripsw/2023/piratla2023neuripsw-estimation/)

BibTeX

@inproceedings{piratla2023neuripsw-estimation,
  title     = {{Estimation of Concept Explanations Should Be Uncertainty Aware}},
  author    = {Piratla, Vihari and Heo, Juyeon and Singh, Sukriti and Weller, Adrian},
  booktitle = {NeurIPS 2023 Workshops: XAIA},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/piratla2023neuripsw-estimation/}
}