Position: AI Agents & Liability – Mapping Insights from ML and HCI Research to Policy

Abstract

AI agents are loosely defined as systems capable of executing complex, open-ended tasks. Many have raised concerns that these systems will pose significant challenges to regulatory and legal frameworks, particularly tort liability. However, because there is no universally accepted definition of an AI agent, concrete analyses of these challenges remain limited, especially as AI systems continue to grow in capability. In this paper, we argue that by focusing on the properties of AI agents, rather than on the threshold at which an AI system becomes an agent, we can map existing technical research to explicit categories of “foreseeable harms” in tort liability and point to “reasonable actions” that developers can take to mitigate those harms.

Cite

Text

Dunlop et al. "Position: AI Agents & Liability – Mapping Insights from ML and HCI Research to Policy." NeurIPS 2024 Workshops: SoLaR, 2024.

Markdown

[Dunlop et al. "Position: AI Agents & Liability – Mapping Insights from ML and HCI Research to Policy." NeurIPS 2024 Workshops: SoLaR, 2024.](https://mlanthology.org/neuripsw/2024/dunlop2024neuripsw-position/)

BibTeX

@inproceedings{dunlop2024neuripsw-position,
  title     = {{Position: AI Agents \& Liability -- Mapping Insights from ML and HCI Research to Policy}},
  author    = {Dunlop, Connor and Pan, Weiwei and Smakman, Julia and Soder, Lisa and Swaroop, Siddharth},
  booktitle = {NeurIPS 2024 Workshops: SoLaR},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/dunlop2024neuripsw-position/}
}