Strategyproof Classification with Shared Inputs
Reshef Meir, Ariel D. Procaccia, Jeffrey S. Rosenschein
Abstract
Strategyproof classification deals with a setting where a decision-maker must classify a set of input points with binary labels, while minimizing the expected error. The labels of the input points are reported by self-interested agents, who might lie in order to obtain a classifier that more closely matches their own labels, thus creating a bias in the data; this motivates the design of truthful mechanisms that discourage false reports. Previous work [Meir et al., 2008] investigated both decision-theoretic and learning-theoretic variations of the setting, but only considered classifiers that belong to a degenerate class. In this paper we assume that the agents are interested in a shared set of input points. We show that this plausible assumption leads to powerful results. In particular, we demonstrate that variations of a truthful random dictator mechanism can guarantee approximately optimal outcomes with respect to any class of classifiers.
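As a rough illustration only (not the paper's construction), the sketch below implements a generic random-dictator rule for shared inputs: one agent is drawn uniformly at random, and the classifier that minimizes empirical error on that agent's reported labels is returned. The names `random_dictator` and `empirical_risk`, and the finite concept class represented as candidate label vectors, are hypothetical simplifications for this sketch.

```python
import random
from typing import List, Sequence

def empirical_risk(hypothesis: Sequence[int], labels: Sequence[int]) -> float:
    """Fraction of the shared points on which the hypothesis disagrees with the labels."""
    return sum(h != y for h, y in zip(hypothesis, labels)) / len(labels)

def random_dictator(reports: List[Sequence[int]],
                    concept_class: List[Sequence[int]],
                    rng: random.Random = random.Random(0)) -> Sequence[int]:
    """Pick one agent uniformly at random and fit the concept class to its report.

    The outcome depends only on the chosen agent's own labels, which is the
    intuition behind truthfulness of dictator-style rules: misreporting can
    never move the outcome closer to an agent's true labels.
    """
    dictator = rng.choice(reports)
    return min(concept_class, key=lambda h: empirical_risk(h, dictator))

# Toy usage: three agents label the same four shared points.
if __name__ == "__main__":
    reports = [(1, 1, 0, 0), (1, 0, 0, 0), (1, 1, 1, 0)]
    concept_class = [(1, 1, 0, 0), (0, 0, 0, 0), (1, 1, 1, 1)]
    print(random_dictator(reports, concept_class))
```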
Cite
Text
Meir et al. "Strategyproof Classification with Shared Inputs." International Joint Conference on Artificial Intelligence, 2009.Markdown
[Meir et al. "Strategyproof Classification with Shared Inputs." International Joint Conference on Artificial Intelligence, 2009.](https://mlanthology.org/ijcai/2009/meir2009ijcai-strategyproof/)BibTeX
@inproceedings{meir2009ijcai-strategyproof,
title = {{Strategyproof Classification with Shared Inputs}},
author = {Meir, Reshef and Procaccia, Ariel D. and Rosenschein, Jeffrey S.},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2009},
pages = {220--225},
url = {https://mlanthology.org/ijcai/2009/meir2009ijcai-strategyproof/}
}