Ranking with Abstention
Abstract
We introduce a novel framework of *ranking with abstention*, where the learner can abstain from making a prediction at some limited cost $c$. We present an extensive theoretical analysis of this framework, including a series of *$H$-consistency bounds* for both the family of linear functions and that of neural networks with one hidden layer. These guarantees are the state-of-the-art consistency guarantees in the literature: upper bounds on the target loss estimation error of a predictor in a hypothesis set $H$, expressed in terms of the surrogate loss estimation error of that predictor. We further argue that our proposed abstention methods are important when using common equicontinuous hypothesis sets in practice. We report the results of experiments illustrating the effectiveness of ranking with abstention.
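To make the shape of these guarantees concrete, an $H$-consistency bound can be sketched as follows, where $\ell$ denotes the target (abstention) ranking loss, $\Phi$ a surrogate loss, $\mathcal{E}_{L}(h)$ the expected loss of a predictor $h$ under a loss $L$, and $\Gamma$ a non-decreasing function; this is a generic illustration of the bound's form and not the paper's exact statement:

$$
\mathcal{E}_{\ell}(h) \;-\; \inf_{f \in H} \mathcal{E}_{\ell}(f) \;\leq\; \Gamma\Big(\mathcal{E}_{\Phi}(h) \;-\; \inf_{f \in H} \mathcal{E}_{\Phi}(f)\Big), \qquad \forall h \in H.
$$

A bound of this type implies that driving the surrogate loss estimation error of $h$ to zero over $H$ also drives its target loss estimation error to zero, which is what makes it a consistency guarantee relative to the hypothesis set $H$.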
Cite
Text
Mao et al. "Ranking with Abstention." ICML 2023 Workshops: MFPL, 2023.Markdown
[Mao et al. "Ranking with Abstention." ICML 2023 Workshops: MFPL, 2023.](https://mlanthology.org/icmlw/2023/mao2023icmlw-ranking/)BibTeX
@inproceedings{mao2023icmlw-ranking,
  title = {{Ranking with Abstention}},
  author = {Mao, Anqi and Mohri, Mehryar and Zhong, Yutao},
  booktitle = {ICML 2023 Workshops: MFPL},
  year = {2023},
  url = {https://mlanthology.org/icmlw/2023/mao2023icmlw-ranking/}
}