Monitoring Human Dependence on AI Systems with Reliance Drills

Abstract

AI systems are assisting humans with increasingly diverse intellectual tasks but remain prone to mistakes. Humans over-rely on this assistance when they accept AI-generated advice even though they would have made a better decision on their own. To identify such instances of over-reliance, this paper proposes the reliance drill: an exercise that tests whether a human can recognise mistakes in AI-generated advice. Our paper examines the reasons why an organisation might choose to implement reliance drills and the doubts it may have about doing so. As an example, we consider the benefits and risks that could arise when using these drills to detect over-reliance on AI among healthcare professionals. We conclude by arguing that reliance drills should become a standard risk-management practice for ensuring humans remain appropriately involved in the oversight of AI-assisted decisions.

Cite

Text

Hunter et al. "Monitoring Human Dependence on AI Systems with Reliance Drills." NeurIPS 2024 Workshops: SoLaR, 2024.

Markdown

[Hunter et al. "Monitoring Human Dependence on AI Systems with Reliance Drills." NeurIPS 2024 Workshops: SoLaR, 2024.](https://mlanthology.org/neuripsw/2024/hunter2024neuripsw-monitoring/)

BibTeX

@inproceedings{hunter2024neuripsw-monitoring,
  title     = {{Monitoring Human Dependence on AI Systems with Reliance Drills}},
  author    = {Hunter, Rosco and Moulange, Richard and Bernardi, Jamie and Stein, Merlin},
  booktitle = {NeurIPS 2024 Workshops: SoLaR},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/hunter2024neuripsw-monitoring/}
}