Explaining a Black-Box by Using a Deep Variational Information Bottleneck Approach
Abstract
Interpretable machine learning has gained much attention recently. When explaining a black-box decision system, an explanation must be both brief and comprehensive in order to convey a large amount of information concisely. However, existing interpretable machine learning methods fail to consider briefness and comprehensiveness simultaneously, which leads to redundant explanations. We propose the variational information bottleneck for interpretation, VIBI, a system-agnostic interpretable method that provides a brief but comprehensive explanation. VIBI adopts an information-theoretic principle, the information bottleneck principle, as a criterion for finding such explanations. For each instance, VIBI selects key features that are maximally compressed with respect to the input (briefness) and informative about the decision made by the black-box system on that input (comprehensiveness). We evaluate VIBI on three datasets and compare it with state-of-the-art interpretable machine learning methods in terms of both interpretability and fidelity, as evaluated by human and quantitative metrics.
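For concreteness, the selection criterion described in the abstract can be sketched with the standard information bottleneck objective. The notation below (input x, black-box decision y, explanation t, trade-off parameter β) is illustrative and not taken verbatim from the paper:

$$
\max_{p(t \mid x)} \; I(t; y) \;-\; \beta \, I(t; x)
$$

Here, maximizing $I(t; y)$ keeps the explanation informative about the black-box decision (comprehensiveness), while penalizing $I(t; x)$ compresses the explanation with respect to the input (briefness); $\beta > 0$ controls the trade-off between the two.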
Cite
Text
Bang et al. "Explaining a Black-Box by Using a Deep Variational Information Bottleneck Approach." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I13.17358Markdown
[Bang et al. "Explaining a Black-Box by Using a Deep Variational Information Bottleneck Approach." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/bang2021aaai-explaining/) doi:10.1609/AAAI.V35I13.17358BibTeX
@inproceedings{bang2021aaai-explaining,
title = {{Explaining a Black-Box by Using a Deep Variational Information Bottleneck Approach}},
author = {Bang, Seo-Jin and Xie, Pengtao and Lee, Heewook and Wu, Wei and Xing, Eric P.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2021},
  pages = {11396--11404},
doi = {10.1609/AAAI.V35I13.17358},
url = {https://mlanthology.org/aaai/2021/bang2021aaai-explaining/}
}