FLEX: Faithful Linguistic Explanations for Neural Net Based Model Decisions
Abstract
Explaining the decisions of a Deep Learning Network is imperative to safeguard end-user trust. Such explanations must be intuitive, descriptive, and faithfully explain why a model makes its decisions. In this work, we propose a framework called FLEX (Faithful Linguistic EXplanations) that generates post-hoc linguistic justifications to rationalize the decision of a Convolutional Neural Network. FLEX explains a model’s decision in terms of features that are responsible for the decision. We derive a novel way to associate such features to words, and introduce a new decision-relevance metric that measures the faithfulness of an explanation to a model’s reasoning. Experimental results on two benchmark datasets demonstrate that the proposed framework can generate discriminative and faithful explanations compared to state-of-the-art explanation generators. We also show how FLEX can generate explanations for images of unseen classes as well as automatically annotate objects in images.
Cite
Text
Wickramanayake et al. "FLEX: Faithful Linguistic Explanations for Neural Net Based Model Decisions." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33012539
Markdown
[Wickramanayake et al. "FLEX: Faithful Linguistic Explanations for Neural Net Based Model Decisions." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/wickramanayake2019aaai-flex/) doi:10.1609/AAAI.V33I01.33012539
BibTeX
@inproceedings{wickramanayake2019aaai-flex,
title = {{FLEX: Faithful Linguistic Explanations for Neural Net Based Model Decisions}},
author = {Wickramanayake, Sandareka and Hsu, Wynne and Lee, Mong-Li},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2019},
pages = {2539-2546},
doi = {10.1609/AAAI.V33I01.33012539},
url = {https://mlanthology.org/aaai/2019/wickramanayake2019aaai-flex/}
}