Strategyproof Peer Selection: Mechanisms, Analyses, and Experiments

Abstract

We study an important crowdsourcing setting where agents evaluate one another and, based on these evaluations, a subset of agents is selected. This setting is ubiquitous when peer review is used for distributing awards in a team, allocating funding to scientists, and selecting publications for conferences. The fundamental challenge when applying crowdsourcing in these settings is that agents may misreport their reviews of others to increase their own chances of being selected. We propose a new strategyproof (impartial) mechanism called Dollar Partition that satisfies desirable axiomatic properties. We then show, using a detailed experiment with parameter values derived from target real-world domains, that our mechanism performs better on average, and in the worst case, than other strategyproof mechanisms in the literature.
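The abstract names the Dollar Partition mechanism but does not spell out its steps. For orientation, below is a minimal Python sketch of the idea as the paper presents it, with several details assumed here rather than taken from the abstract: each reviewer's scores are normalized into a single "dollar", cluster shares count only reviews from outside the cluster, and per-cluster quotas are rounded up (so slightly more than k agents may be chosen, a relaxation the paper permits). The function name and data layout are illustrative, not the authors' code.

import math

def dollar_partition(scores, clusters, k):
    """Hedged sketch of Dollar Partition: select roughly k agents.

    scores[i][j] is agent i's nonnegative evaluation of agent j;
    clusters is a list of lists of agent ids partitioning the agents;
    k is the target selection size.
    """
    cluster_of = {i: c for c, members in enumerate(clusters) for i in members}

    # Step 1: each reviewer splits one "dollar" among the agents it
    # reviewed outside its own cluster.
    normalized = {}
    for i, reviews in scores.items():
        external = {j: s for j, s in reviews.items()
                    if cluster_of[j] != cluster_of[i]}
        total = sum(external.values())
        normalized[i] = ({j: s / total for j, s in external.items()}
                         if total > 0 else
                         {j: 1.0 / len(external) for j in external})

    # Step 2: a cluster's share is the money its members received.
    # Only reviewers outside a cluster contribute to its share, so no
    # agent can influence its own cluster's share -- the source of
    # strategyproofness.
    money = [0.0] * len(clusters)
    received = {i: 0.0 for i in cluster_of}
    for alloc in normalized.values():
        for j, m in alloc.items():
            money[cluster_of[j]] += m
            received[j] += m
    grand_total = sum(money) or 1.0
    shares = [m / grand_total for m in money]

    # Step 3: give each cluster ceil(share * k) slots and pick its top
    # members, ranked by the money they received from outside.
    selected = []
    for members, share in zip(clusters, shares):
        slots = math.ceil(share * k)
        ranked = sorted(members, key=lambda j: received[j], reverse=True)
        selected.extend(ranked[:slots])
    return selected

# Toy usage: four agents in two clusters, target k = 2.
scores = {
    0: {2: 5, 3: 1},
    1: {2: 2, 3: 4},
    2: {0: 3, 1: 3},
    3: {0: 1, 1: 5},
}
print(dollar_partition(scores, [[0, 1], [2, 3]], k=2))  # -> [1, 2]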

Cite

Text

Aziz et al. "Strategyproof Peer Selection: Mechanisms, Analyses, and Experiments." AAAI Conference on Artificial Intelligence, 2016. doi:10.1609/AAAI.V30I1.10038

Markdown

[Aziz et al. "Strategyproof Peer Selection: Mechanisms, Analyses, and Experiments." AAAI Conference on Artificial Intelligence, 2016.](https://mlanthology.org/aaai/2016/aziz2016aaai-strategyproof/) doi:10.1609/AAAI.V30I1.10038

BibTeX

@inproceedings{aziz2016aaai-strategyproof,
  title     = {{Strategyproof Peer Selection: Mechanisms, Analyses, and Experiments}},
  author    = {Aziz, Haris and Lev, Omer and Mattei, Nicholas and Rosenschein, Jeffrey S. and Walsh, Toby},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2016},
  pages     = {397--403},
  doi       = {10.1609/AAAI.V30I1.10038},
  url       = {https://mlanthology.org/aaai/2016/aziz2016aaai-strategyproof/}
}