ConceptX: A Framework for Latent Concept Analysis

Abstract

The opacity of deep neural networks remains a challenge when deploying solutions where explanation is as important as precision. We present ConceptX, a human-in-the-loop framework for interpreting and annotating the latent representational space of pre-trained Language Models (pLMs). We use an unsupervised method to discover the concepts learned in these models and provide a graphical interface through which humans can generate explanations for the concepts. To facilitate the process, we supply auto-annotations of the concepts based on traditional linguistic ontologies. Such annotations enable the development of a linguistic resource that directly represents the latent concepts learned within deep NLP models. These include not only traditional linguistic concepts but also task-specific or sensitive concepts (e.g., words grouped by gender or religious connotation) that help annotators mark bias in the model. The framework consists of two parts: (i) concept discovery and (ii) an annotation platform.
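The concept-discovery step described above groups token representations into clusters that can then be surfaced to annotators. A minimal sketch of that idea, assuming k-means as the clustering method and random vectors standing in for contextualized embeddings extracted from a pLM (the actual framework may use a different clustering algorithm and real model activations):

```python
# Hypothetical sketch of unsupervised concept discovery: cluster token
# representations so that each cluster forms a candidate latent "concept".
# Random vectors around two centroids stand in for pLM embeddings here.
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: returns a cluster label for each row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center (squared Euclidean distance)
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # recompute each center as the mean of its assigned points
        centers = np.stack([X[labels == j].mean(0) for j in range(k)])
    return labels

# toy "embeddings": two well-separated groups of 8-dimensional token vectors
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 8)), rng.normal(3.0, 0.1, (20, 8))])
labels = kmeans(X, k=2)
```

In the framework itself, each resulting cluster would be shown with its member tokens and an auto-annotation suggestion, and a human would accept, refine, or flag the label.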

Cite

Text

Alam et al. "ConceptX: A Framework for Latent Concept Analysis." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I13.27057

Markdown

[Alam et al. "ConceptX: A Framework for Latent Concept Analysis." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/alam2023aaai-conceptx/) doi:10.1609/AAAI.V37I13.27057

BibTeX

@inproceedings{alam2023aaai-conceptx,
  title     = {{ConceptX: A Framework for Latent Concept Analysis}},
  author    = {Alam, Firoj and Dalvi, Fahim and Durrani, Nadir and Sajjad, Hassan and Khan, Abdul Rafae and Xu, Jia},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {16395--16397},
  doi       = {10.1609/AAAI.V37I13.27057},
  url       = {https://mlanthology.org/aaai/2023/alam2023aaai-conceptx/}
}