Building Human-Machine Trust via Interpretability
Abstract
Developing human-machine trust is a prerequisite for the adoption of machine learning systems in decision-critical settings (e.g., healthcare and governance). Users develop appropriate trust in these systems when they understand how the systems make their decisions. Interpretability not only helps users understand what a system learns but also helps users contest that system so that it aligns with their intuition. We propose an algorithm, AVA: Aggregate Valuation of Antecedents, that generates a consensus feature attribution, retrieving local explanations and capturing global patterns learned by a model. Our empirical results show that AVA rivals current benchmarks.
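The abstract describes AVA only at a high level: local per-example explanations are aggregated into one global, consensus feature attribution. The sketch below illustrates that idea under an assumption the abstract does not confirm, namely that the consensus is a normalized mean of absolute local attribution values; the function name and the aggregation rule are illustrative, not the paper's actual method.

```python
import numpy as np

def consensus_attribution(local_attributions):
    """Aggregate per-example (local) feature attributions into one
    global consensus vector.

    local_attributions: (n_examples, n_features) array, where each row
    is a local explanation (e.g., gradients or Shapley values) for one
    input. The mean-of-absolute-values rule here is an assumption made
    for illustration; AVA's actual consensus rule may differ.
    """
    local_attributions = np.asarray(local_attributions, dtype=float)
    # Take magnitudes so opposing signs across examples do not cancel,
    # then average to get one importance score per feature.
    consensus = np.mean(np.abs(local_attributions), axis=0)
    # Normalize so the scores sum to 1, making features comparable.
    return consensus / consensus.sum()

# Example: three local explanations over four features.
local = [[0.5, -0.1, 0.2, 0.0],
         [0.4,  0.2, 0.1, 0.1],
         [0.6, -0.3, 0.0, 0.1]]
print(consensus_attribution(local))
```

On this toy input, the first feature dominates the consensus, matching the intuition that a feature consistently important across local explanations should rank highly in the global attribution.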
Cite

Text

Bhatt et al. "Building Human-Machine Trust via Interpretability." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33019919

Markdown

[Bhatt et al. "Building Human-Machine Trust via Interpretability." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/bhatt2019aaai-building/) doi:10.1609/AAAI.V33I01.33019919

BibTeX
@inproceedings{bhatt2019aaai-building,
title = {{Building Human-Machine Trust via Interpretability}},
author = {Bhatt, Umang and Ravikumar, Pradeep and Moura, José M. F.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2019},
  pages = {9919--9920},
doi = {10.1609/AAAI.V33I01.33019919},
url = {https://mlanthology.org/aaai/2019/bhatt2019aaai-building/}
}