On Evaluating Explanation Utility for Human-AI Decision-Making in NLP

Abstract

Is explainability a false promise? This debate has emerged from the lack of consistent evidence that explanations help in the situations for which they are introduced. In NLP, the evidence is not only inconsistent but also scarce. While there is a clear need for more human-centered, application-grounded evaluations, it is less clear where NLP researchers should begin if they want to conduct them. To address this, we introduce evaluation guidelines established through an extensive review and meta-analysis of related work.

Cite

Text

Chaleshtori et al. "On Evaluating Explanation Utility for Human-AI Decision-Making in NLP." NeurIPS 2023 Workshops: XAIA, 2023.

Markdown

[Chaleshtori et al. "On Evaluating Explanation Utility for Human-AI Decision-Making in NLP." NeurIPS 2023 Workshops: XAIA, 2023.](https://mlanthology.org/neuripsw/2023/chaleshtori2023neuripsw-evaluating/)

BibTeX

@inproceedings{chaleshtori2023neuripsw-evaluating,
  title     = {{On Evaluating Explanation Utility for Human-AI Decision-Making in NLP}},
  author    = {Chaleshtori, Fateme Hashemi and Ghosal, Atreya and Marasovic, Ana},
  booktitle = {NeurIPS 2023 Workshops: XAIA},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/chaleshtori2023neuripsw-evaluating/}
}