Removing Bias and Incentivizing Precision in Peer-Grading
Abstract
Most peer-evaluation practices rely on graders' goodwill and model them as potentially noisy, but honest, evaluators. But what if graders are competitive, i.e., derive higher utility when their peers receive lower scores? We model this setting as a multi-agent incentive design problem and propose a new mechanism, PEQA, that incentivizes these agents (peer graders) through a score-assignment rule and a grading-performance score. PEQA is designed so that grader bias becomes irrelevant and a grader's utility is monotonically increasing in grading precision, despite competitiveness. When grading is costly and costs are the private information of the individual graders, a modified version of PEQA implements the socially optimal grading choices in equilibrium. Data from our classroom experiments are consistent with our theoretical assumptions and show that PEQA outperforms the popular median mechanism, which is used in several massive open online courses (MOOCs).
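For context, the median-mechanism baseline mentioned above simply assigns each submission the median of its peer grades. A minimal sketch (the function name and sample scores are illustrative, not from the paper):

```python
from statistics import median

def median_grade(peer_scores):
    """Aggregate peer grades by taking their median.

    The median is robust to a single outlying grade, but it gives a
    competitive grader no positive incentive to grade precisely.
    """
    return median(peer_scores)

# One deliberately low grade from a competitive peer barely moves the median:
print(median_grade([8, 9, 2]))  # -> 8
```

Note that while the median limits the damage a single biased grader can do, it does not reward precision, which is the gap PEQA's grading-performance score targets.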
Cite

Text

Chakraborty et al. "Removing Bias and Incentivizing Precision in Peer-Grading." Journal of Artificial Intelligence Research, 2024. doi:10.1613/JAIR.1.15329

BibTeX
@article{chakraborty2024jair-removing,
title = {{Removing Bias and Incentivizing Precision in Peer-Grading}},
author = {Chakraborty, Anujit and Jindal, Jatin and Nath, Swaprava},
journal = {Journal of Artificial Intelligence Research},
year = {2024},
pages = {1001--1046},
doi = {10.1613/JAIR.1.15329},
volume = {79},
url = {https://mlanthology.org/jair/2024/chakraborty2024jair-removing/}
}