Conversational Explanations of Machine Learning Predictions Through Class-Contrastive Counterfactual Statements
Abstract
Machine learning models have become pervasive in our everyday life; they decide on important matters influencing our education, employment and judicial system. Many of these predictive systems are commercial products protected by trade secrets, hence their decision-making is opaque. Therefore, in our research we address interpretability and explainability of predictions made by machine learning models. Our work draws heavily on human explanation research in social sciences: contrastive and exemplar explanations provided through a dialogue. This user-centric design, focusing on a lay audience rather than domain experts, applied to machine learning allows explainees to drive the explanation to suit their needs instead of being served a precooked template.
Cite
Text
Sokol and Flach. "Conversational Explanations of Machine Learning Predictions Through Class-Contrastive Counterfactual Statements." International Joint Conference on Artificial Intelligence, 2018. doi:10.24963/IJCAI.2018/836
Markdown
[Sokol and Flach. "Conversational Explanations of Machine Learning Predictions Through Class-Contrastive Counterfactual Statements." International Joint Conference on Artificial Intelligence, 2018.](https://mlanthology.org/ijcai/2018/sokol2018ijcai-conversational/) doi:10.24963/IJCAI.2018/836
BibTeX
@inproceedings{sokol2018ijcai-conversational,
title = {{Conversational Explanations of Machine Learning Predictions Through Class-Contrastive Counterfactual Statements}},
author = {Sokol, Kacper and Flach, Peter A.},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2018},
pages = {5785--5786},
doi = {10.24963/IJCAI.2018/836},
url = {https://mlanthology.org/ijcai/2018/sokol2018ijcai-conversational/}
}