Quality Expectation-Variance Tradeoffs in Crowdsourcing Contests

Abstract

We examine designs for crowdsourcing contests, where participants compete for rewards given to superior solutions of a task. We theoretically analyze the tradeoff between the expectation and variance of the principal's utility (i.e., the quality of the best solution), and empirically test our theoretical predictions using a controlled experiment on Amazon Mechanical Turk. Our evaluation method is also crowdsourcing-based and relies on the peer prediction mechanism. Our theoretical analysis characterizes the expectation-variance tradeoff of the principal's utility in such contests via a Pareto-efficient frontier. In particular, we show that the simple contest with 2 authors and the 2-pair contest have good theoretical properties. Our empirical results further show that the 2-pair contest is the superior design among all designs tested, achieving the highest expectation and lowest variance of the principal's utility.

Cite

Text

Gao et al. "Quality Expectation-Variance Tradeoffs in Crowdsourcing Contests." AAAI Conference on Artificial Intelligence, 2012. doi:10.1609/AAAI.V26I1.8098

Markdown

[Gao et al. "Quality Expectation-Variance Tradeoffs in Crowdsourcing Contests." AAAI Conference on Artificial Intelligence, 2012.](https://mlanthology.org/aaai/2012/gao2012aaai-quality/) doi:10.1609/AAAI.V26I1.8098

BibTeX

@inproceedings{gao2012aaai-quality,
  title     = {{Quality Expectation-Variance Tradeoffs in Crowdsourcing Contests}},
  author    = {Gao, Xi Alice and Bachrach, Yoram and Key, Peter B. and Graepel, Thore},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2012},
  pages     = {38--44},
  doi       = {10.1609/AAAI.V26I1.8098},
  url       = {https://mlanthology.org/aaai/2012/gao2012aaai-quality/}
}