Coherence Evaluation of Visual Concepts with Objects and Language

Abstract

Meaningful concepts are the fundamental elements of human reasoning. In explainable AI, they are used to provide concept-based explanations of machine learning models. The concepts are often extracted from large-scale image data sets in an unsupervised manner and are therefore not guaranteed to be meaningful to users. In this work, we investigate to what extent we can automatically assess the meaningfulness of such visual concepts using objects and language as forms of supervision. On the way towards discovering more interpretable concepts, we propose the “Semantic-level, Object and Language-Guided Coherence Evaluation” framework for visual concepts (SOLaCE). SOLaCE assigns semantic meanings in the form of words to automatically discovered visual concepts and evaluates their degree of meaningfulness on this higher level without human effort. We consider the question of whether objects suffice as possible meanings, or whether a broader vocabulary including more abstract meanings needs to be considered. By means of a user study, we confirm that our simulated evaluations agree closely with the human perception of coherence. We publicly release our data set containing 2600 human ratings of visual concepts.

Cite

Text

Leemann et al. "Coherence Evaluation of Visual Concepts with Objects and Language." ICLR 2022 Workshops: OSC, 2022.

Markdown

[Leemann et al. "Coherence Evaluation of Visual Concepts with Objects and Language." ICLR 2022 Workshops: OSC, 2022.](https://mlanthology.org/iclrw/2022/leemann2022iclrw-coherence/)

BibTeX

@inproceedings{leemann2022iclrw-coherence,
  title     = {{Coherence Evaluation of Visual Concepts with Objects and Language}},
  author    = {Leemann, Tobias and Rong, Yao and Kraft, Stefan and Kasneci, Enkelejda and Kasneci, Gjergji},
  booktitle = {ICLR 2022 Workshops: OSC},
  year      = {2022},
  url       = {https://mlanthology.org/iclrw/2022/leemann2022iclrw-coherence/}
}