Supervised Learning from Multiple Experts: Whom to Trust When Everyone Lies a Bit

Abstract

We describe a probabilistic approach to supervised learning when multiple experts or annotators provide (possibly noisy) labels but no absolute gold standard is available. The proposed algorithm evaluates the different experts and also estimates the actual hidden labels. Experimental results indicate that the proposed method clearly outperforms the commonly used majority-voting baseline.
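A minimal sketch of the kind of EM procedure the abstract describes: a simplified two-coin annotator model (each annotator j has an unknown sensitivity alpha_j and specificity beta_j), alternating between estimating annotator reliabilities and the posterior over hidden labels. This is an illustrative simplification in the spirit of the paper, not its exact algorithm, which additionally learns a classifier jointly; the simulated annotator qualities below are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate hidden ground truth and five annotators of varying (unknown) quality.
n_items = 1000
y = rng.random(n_items) < 0.5                     # hidden true binary labels
quality = np.array([0.95, 0.9, 0.85, 0.6, 0.55])  # assumed per-annotator accuracy
# Each annotator reports the true label with probability quality[j].
correct = rng.random((n_items, quality.size)) < quality
L = np.where(correct, y[:, None], ~y[:, None]).astype(float)

def em_annotators(L, n_iter=50, eps=1e-6):
    """Estimate hidden labels and annotator reliabilities by EM.

    L is an (items x annotators) 0/1 label matrix. Returns the posterior
    P(y_i = 1), each annotator's sensitivity alpha_j = P(l=1 | y=1), and
    specificity beta_j = P(l=0 | y=0).
    """
    mu = L.mean(axis=1)  # initialize posterior with a soft majority vote
    for _ in range(n_iter):
        # M-step: class prior and annotator parameters from the soft labels.
        p = mu.mean()
        alpha = np.clip((mu @ L) / mu.sum(), eps, 1 - eps)
        beta = np.clip(((1 - mu) @ (1 - L)) / (1 - mu).sum(), eps, 1 - eps)
        # E-step: posterior of each true label given all annotations,
        # weighting reliable annotators more heavily.
        a = p * np.prod(alpha ** L * (1 - alpha) ** (1 - L), axis=1)
        b = (1 - p) * np.prod(beta ** (1 - L) * (1 - beta) ** L, axis=1)
        mu = a / (a + b)
    return mu, alpha, beta

mu, alpha_hat, beta_hat = em_annotators(L)
em_acc = np.mean((mu > 0.5) == y)
maj_acc = np.mean((L.mean(axis=1) > 0.5) == y)
print(f"EM accuracy: {em_acc:.3f}  majority vote: {maj_acc:.3f}")
```

Because the E-step weights each annotator by its estimated reliability, the EM posterior downweights the two weak annotators that a plain majority vote counts equally, which is the intuition behind the paper's improvement over the majority-voting baseline.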

Cite

Text

Raykar et al. "Supervised Learning from Multiple Experts: Whom to Trust When Everyone Lies a Bit." International Conference on Machine Learning, 2009. doi:10.1145/1553374.1553488

Markdown

[Raykar et al. "Supervised Learning from Multiple Experts: Whom to Trust When Everyone Lies a Bit." International Conference on Machine Learning, 2009.](https://mlanthology.org/icml/2009/raykar2009icml-supervised/) doi:10.1145/1553374.1553488

BibTeX

@inproceedings{raykar2009icml-supervised,
  title     = {{Supervised Learning from Multiple Experts: Whom to Trust When Everyone Lies a Bit}},
  author    = {Raykar, Vikas C. and Yu, Shipeng and Zhao, Linda H. and Jerebko, Anna K. and Florin, Charles and Valadez, Gerardo Hermosillo and Bogoni, Luca and Moy, Linda},
  booktitle = {International Conference on Machine Learning},
  year      = {2009},
  pages     = {889--896},
  doi       = {10.1145/1553374.1553488},
  url       = {https://mlanthology.org/icml/2009/raykar2009icml-supervised/}
}