Bonus or Not? Learn to Reward in Crowdsourcing

Abstract

Recent work has shown that the quality of work produced in a crowdsourcing working session can be influenced by performance-contingent financial incentives, such as bonuses for exceptional performance, offered during the session. We take an algorithmic approach to deciding when to offer bonuses in a working session so as to improve the overall utility that a requester derives from the session. Specifically, we propose and train an input-output hidden Markov model to learn the impact of bonuses on work quality, and then use this model to dynamically decide whether to offer a bonus on each task in a working session to maximize the requester's utility. Experiments on Amazon Mechanical Turk show that our approach leads to higher utility for the requester than fixed and random bonus schemes do. Simulations on synthesized data sets further demonstrate the robustness of our approach in improving requester utility across different worker populations and worker behaviors.
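The dynamic bonus decision described in the abstract can be sketched as a greedy one-step policy over an input-output HMM, where the bonus decision is the input that conditions the hidden-state transitions. Everything below (the two quality states, the transition and emission probabilities, the reward and cost values, and the function names) is an illustrative assumption for exposition, not the paper's learned model or actual algorithm.

```python
import numpy as np

# Hidden states: a worker's latent "quality state" (0 = low, 1 = high).
# Input (action): 0 = no bonus, 1 = offer bonus on the next task.
N_STATES = 2
ACTIONS = [0, 1]

# IOHMM transition matrices conditioned on the input:
# T[a][i, j] = P(next state j | current state i, action a).
# Numbers are made up: a bonus is assumed to push workers toward
# the high-quality state, and to help retain workers already there.
T = {
    0: np.array([[0.8, 0.2],
                 [0.3, 0.7]]),
    1: np.array([[0.5, 0.5],
                 [0.1, 0.9]]),
}

# Emission model: probability of a high-quality answer in each state.
p_good = np.array([0.4, 0.9])

def expected_utility(belief, action, reward_good=1.0, bonus_cost=0.3):
    """One-step expected requester utility of an action, given a belief
    (probability distribution) over the worker's hidden quality state."""
    next_belief = belief @ T[action]          # propagate belief through IOHMM
    exp_quality = next_belief @ p_good        # expected P(high-quality answer)
    return reward_good * exp_quality - bonus_cost * action, next_belief

def choose_bonus(belief):
    """Greedily pick the bonus decision with the best one-step utility.
    (A full treatment would plan over the whole session, not one step.)"""
    utils = {a: expected_utility(belief, a)[0] for a in ACTIONS}
    return max(utils, key=utils.get)

belief = np.array([0.5, 0.5])  # uniform prior over the quality states
action = choose_bonus(belief)
```

With these particular made-up numbers the bonus's quality lift does not cover its cost, so the greedy policy withholds the bonus; changing `bonus_cost` or the transition matrices flips that decision, which is exactly the trade-off a learned model would resolve per task.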

Cite

Text

Yin and Chen. "Bonus or Not? Learn to Reward in Crowdsourcing." International Joint Conference on Artificial Intelligence, 2015.

Markdown

[Yin and Chen. "Bonus or Not? Learn to Reward in Crowdsourcing." International Joint Conference on Artificial Intelligence, 2015.](https://mlanthology.org/ijcai/2015/yin2015ijcai-bonus/)

BibTeX

@inproceedings{yin2015ijcai-bonus,
  title     = {{Bonus or Not? Learn to Reward in Crowdsourcing}},
  author    = {Yin, Ming and Chen, Yiling},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2015},
  pages     = {201--208},
  url       = {https://mlanthology.org/ijcai/2015/yin2015ijcai-bonus/}
}