Evaluating the Interpretability of the Knowledge Compilation Map: Communicating Logical Statements Effectively

Abstract

Knowledge compilation techniques translate propositional theories into equivalent forms to increase their computational tractability. But how should we best present these propositional theories to a human? We analyze the standard taxonomy of propositional theories for relative interpretability across three model domains: highway driving, emergency triage, and the chopsticks game. We generate decision-making agents that produce logical explanations for their actions, and we apply knowledge compilation to these explanations. Then, we evaluate how quickly, accurately, and confidently users comprehend the generated explanations. We find that domain, formula size, and negated logical connectives significantly affect comprehension, while formula properties typically associated with interpretability are not strong predictors of human ability to comprehend the theory.

Cite

Text

Booth et al. "Evaluating the Interpretability of the Knowledge Compilation Map: Communicating Logical Statements Effectively." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/ijcai.2019/804

Markdown

[Booth et al. "Evaluating the Interpretability of the Knowledge Compilation Map: Communicating Logical Statements Effectively." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/booth2019ijcai-evaluating/) doi:10.24963/ijcai.2019/804

BibTeX

@inproceedings{booth2019ijcai-evaluating,
  title     = {{Evaluating the Interpretability of the Knowledge Compilation Map: Communicating Logical Statements Effectively}},
  author    = {Booth, Serena and Muise, Christian and Shah, Julie},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {5801--5807},
  doi       = {10.24963/ijcai.2019/804},
  url       = {https://mlanthology.org/ijcai/2019/booth2019ijcai-evaluating/}
}