Incentivising Monitoring in Open Normative Systems
Abstract
We present an approach to incentivising monitoring for norm violations in open multi-agent systems such as Wikipedia. In such systems, there is no crisp definition of a norm violation; rather, it is a matter of judgement whether an agent's behaviour conforms to generally accepted standards of behaviour. Agents may legitimately disagree about borderline cases. Using ideas from scrip systems and peer prediction, we show how to design a mechanism that incentivises agents to monitor each other's behaviour for norm violations. The mechanism keeps the probability of undetected violations (submissions that the majority of the community would consider not conforming to standards) low, and is robust against collusion by the monitoring agents.
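The abstract describes a scrip-plus-peer-monitoring scheme only at a high level; the sketch below is a toy illustration of that general idea, not the paper's actual mechanism. All names, constants (SUBMIT_COST, MONITOR_REWARD, QUORUM), and the majority-vote rule are assumptions introduced purely for illustration: agents earn scrip by monitoring others' submissions and must spend scrip to submit, and a submission is accepted only if a sampled quorum of monitors approves it.

```python
import random

# Toy illustration (assumed setup, not the paper's mechanism): agents earn
# scrip by monitoring submissions and must spend scrip to submit their own,
# so monitoring effort is incentivised and low-quality submissions are
# unlikely to pass a majority of sampled monitors unnoticed.

SUBMIT_COST = 3      # scrip an agent pays to submit (assumed value)
MONITOR_REWARD = 1   # scrip earned per monitoring report (assumed value)
QUORUM = 3           # number of monitors sampled per submission (assumed)

class Agent:
    def __init__(self, name, strictness):
        self.name = name
        self.scrip = SUBMIT_COST      # start with enough scrip to submit once
        self.strictness = strictness  # own judgement threshold; borderline cases differ

    def judge(self, quality):
        # An agent approves a submission whose quality meets its own standard.
        return quality >= self.strictness

def review(submission_quality, author, community):
    """Sample a quorum of monitors; a majority vote decides acceptance."""
    if author.scrip < SUBMIT_COST:
        return False                  # author cannot afford to submit
    author.scrip -= SUBMIT_COST
    monitors = random.sample([a for a in community if a is not author], QUORUM)
    votes = []
    for m in monitors:
        votes.append(m.judge(submission_quality))
        m.scrip += MONITOR_REWARD     # monitoring is how agents earn scrip
    return sum(votes) > QUORUM // 2

if __name__ == "__main__":
    random.seed(0)
    community = [Agent(f"a{i}", strictness=random.uniform(0.3, 0.7))
                 for i in range(10)]
    author = community[0]
    accepted = review(submission_quality=0.2, author=author, community=community)
    print("low-quality submission accepted?", accepted)
```

In this sketch, agents with differing strictness thresholds stand in for the legitimate disagreement about borderline cases mentioned in the abstract; the paper's actual mechanism additionally uses peer-prediction-style scoring to keep monitors honest and resistant to collusion.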
Cite
Text
Alechina et al. "Incentivising Monitoring in Open Normative Systems." AAAI Conference on Artificial Intelligence, 2017. doi:10.1609/AAAI.V31I1.10610
Markdown
[Alechina et al. "Incentivising Monitoring in Open Normative Systems." AAAI Conference on Artificial Intelligence, 2017.](https://mlanthology.org/aaai/2017/alechina2017aaai-incentivising/) doi:10.1609/AAAI.V31I1.10610
BibTeX
@inproceedings{alechina2017aaai-incentivising,
title = {{Incentivising Monitoring in Open Normative Systems}},
author = {Alechina, Natasha and Halpern, Joseph Y. and Kash, Ian A. and Logan, Brian},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2017},
pages = {305-311},
doi = {10.1609/AAAI.V31I1.10610},
url = {https://mlanthology.org/aaai/2017/alechina2017aaai-incentivising/}
}