Explaining Reinforcement Learning to Mere Mortals: An Empirical Study

Abstract

We present a user study to investigate the impact of explanations on non-experts' understanding of reinforcement learning (RL) agents. We investigate both a common RL visualization, saliency maps (the focus of attention), and a more recent explanation type, reward-decomposition bars (predictions of future types of rewards). We designed a 124-participant, four-treatment experiment to compare participants' mental models of an RL agent in a simple Real-Time Strategy (RTS) game. Our results show that the combination of both saliency and reward bars was needed to achieve a statistically significant improvement in mental model score over the control. In addition, our qualitative analysis of the data reveals a number of effects for further study.

Cite

Text

Anderson et al. "Explaining Reinforcement Learning to Mere Mortals: An Empirical Study." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/184

Markdown

[Anderson et al. "Explaining Reinforcement Learning to Mere Mortals: An Empirical Study." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/anderson2019ijcai-explaining/) doi:10.24963/IJCAI.2019/184

BibTeX

@inproceedings{anderson2019ijcai-explaining,
  title     = {{Explaining Reinforcement Learning to Mere Mortals: An Empirical Study}},
  author    = {Anderson, Andrew and Dodge, Jonathan and Sadarangani, Amrita and Juozapaitis, Zoe and Newman, Evan and Irvine, Jed and Chattopadhyay, Souti and Fern, Alan and Burnett, Margaret},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {1328--1334},
  doi       = {10.24963/IJCAI.2019/184},
  url       = {https://mlanthology.org/ijcai/2019/anderson2019ijcai-explaining/}
}