Ultimatum Bargaining: Algorithms vs. Humans
Abstract
We study human behavior in the ultimatum game when subjects interact with either human or algorithmic opponents. We examine how the type of AI algorithm (mimicking human behavior, optimising gains, or providing no explanation) and the presence of a human beneficiary affect sending and accepting behaviors. Our experimental data reveal that subjects generally do not differentiate between human and algorithmic opponents, between different algorithms, or between an explained and an unexplained algorithm. However, they are more willing to forgo higher payoffs when the algorithm's earnings benefit a human.
Cite

Text

Ozkes et al. "Ultimatum Bargaining: Algorithms vs. Humans." NeurIPS 2024 Workshops: Behavioral_ML, 2024.

Markdown

[Ozkes et al. "Ultimatum Bargaining: Algorithms vs. Humans." NeurIPS 2024 Workshops: Behavioral_ML, 2024.](https://mlanthology.org/neuripsw/2024/ozkes2024neuripsw-ultimatum/)

BibTeX
@inproceedings{ozkes2024neuripsw-ultimatum,
  title     = {{Ultimatum Bargaining: Algorithms vs. Humans}},
  author    = {Ozkes, Ali and Hanaki, Nobuyuki and Vanderelst, Dieter and Willems, Jurgen},
  booktitle = {NeurIPS 2024 Workshops: Behavioral_ML},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/ozkes2024neuripsw-ultimatum/}
}