Who Should Predict? Exact Algorithms for Learning to Defer to Humans
Abstract
Automated AI classifiers should be able to defer predictions to a human decision maker to ensure more accurate outcomes. In this work, we jointly train a classifier with a rejector, which decides on each data point whether the classifier or the human should predict. We show that prior approaches can fail to find a human-AI system with low misclassification error even when there exists a linear classifier and rejector pair with zero error (the realizable setting). We prove that obtaining a linear pair with low error is NP-hard even when the problem is realizable. To complement this negative result, we give a mixed-integer linear programming (MILP) formulation that can optimally solve the problem in the linear setting. However, the MILP only scales to moderately sized problems. Therefore, we provide a novel surrogate loss function that is realizable-consistent and performs well empirically. We test our approaches on a comprehensive set of datasets and compare to a wide range of baselines.
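The classifier-rejector setup described above can be illustrated with a minimal sketch (not the paper's implementation): a linear classifier and a linear rejector, where the rejector's sign decides whether the human or the classifier predicts on each point. The weight vectors and toy data below are purely hypothetical.

```python
import numpy as np

def predict_with_deferral(X, human_preds, w_h, w_r):
    """Linear classifier h(x) = sign(w_h @ x) paired with a linear
    rejector r(x): when w_r @ x > 0 the system defers to the human,
    otherwise the classifier predicts."""
    classifier_preds = np.sign(X @ w_h)
    defer = (X @ w_r) > 0  # rejector decides who predicts per point
    return np.where(defer, human_preds, classifier_preds)

# Toy example (hypothetical weights): the rejector defers whenever
# the second feature is positive.
X = np.array([[1.0, -1.0], [1.0, 1.0]])
human = np.array([1, -1])          # human's predictions on each point
w_h = np.array([1.0, 0.0])         # classifier looks only at feature 1
w_r = np.array([0.0, 1.0])         # defer when feature 2 > 0
print(predict_with_deferral(X, human, w_h, w_r))  # -> [ 1. -1.]
```

On the first point the rejector keeps the classifier's prediction; on the second it defers to the human. Jointly learning `w_h` and `w_r` to minimize the resulting system error is the optimization problem the paper studies.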
Cite
Text
Mozannar et al. "Who Should Predict? Exact Algorithms for Learning to Defer to Humans." Artificial Intelligence and Statistics, 2023.
Markdown
[Mozannar et al. "Who Should Predict? Exact Algorithms for Learning to Defer to Humans." Artificial Intelligence and Statistics, 2023.](https://mlanthology.org/aistats/2023/mozannar2023aistats-predict/)
BibTeX
@inproceedings{mozannar2023aistats-predict,
title = {{Who Should Predict? Exact Algorithms for Learning to Defer to Humans}},
author = {Mozannar, Hussein and Lang, Hunter and Wei, Dennis and Sattigeri, Prasanna and Das, Subhro and Sontag, David},
booktitle = {Artificial Intelligence and Statistics},
year = {2023},
pages = {10520--10545},
volume = {206},
url = {https://mlanthology.org/aistats/2023/mozannar2023aistats-predict/}
}