Transductive Conformal Inference for Full Ranking
Abstract
We introduce a method based on Conformal Prediction (CP) to quantify the uncertainty of full ranking algorithms. We focus on a specific scenario where $n+m$ items are to be ranked by some "black box" algorithm, and the relative (ground truth) ranking of $n$ of them is known. The objective is to quantify the error made by the algorithm on the ranks of the $m$ new items among the total $n+m$. In this setting, the true ranks of the $n$ original items among the $n+m$ depend on the (unknown) true ranks of the $m$ new ones; consequently, we have no direct access to a calibration set on which to apply a classical CP method. To address this challenge, we propose to construct distribution-free bounds on the unknown conformity scores using recent results on the distribution of conformal p-values. Using these upper bounds on the scores, we provide valid prediction sets for the rank of any item. We also control the false coverage proportion, a crucial quantity when dealing with multiple prediction sets. Finally, we empirically demonstrate, on both synthetic and real data, the efficiency of our CP method for state-of-the-art ranking algorithms such as RankNet and LambdaMART.
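To make the setting concrete, here is a minimal Python sketch of the naive split-conformal baseline that the abstract contrasts with. It assumes, contrary to the paper's actual setting, that the true ranks of the $n$ calibration items among all $n+m$ items are known, and builds quantile-based rank intervals from them; the paper's contribution is precisely to replace this unavailable calibration with distribution-free upper bounds on the conformity scores. All function and variable names below are hypothetical illustrations, not the authors' code.

```python
# Illustrative split-conformal rank intervals (NOT the paper's method):
# assumes the true calibration ranks among all n+m items are known.
import numpy as np

def rank_prediction_sets(cal_ranks_pred, cal_ranks_true,
                         test_ranks_pred, n_total, alpha=0.1):
    """Return [lower, upper] rank intervals for the m new items.

    cal_ranks_pred  : predicted ranks of the n calibration items
    cal_ranks_true  : their true ranks among the n_total items (assumed known)
    test_ranks_pred : predicted ranks of the m new items
    """
    n = len(cal_ranks_true)
    # Conformity score: absolute error between predicted and true rank.
    scores = np.abs(np.asarray(cal_ranks_pred) - np.asarray(cal_ranks_true))
    # Finite-sample-corrected quantile, as in standard split CP.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, level, method="higher")
    lower = np.clip(np.asarray(test_ranks_pred) - q, 1, n_total)
    upper = np.clip(np.asarray(test_ranks_pred) + q, 1, n_total)
    return np.stack([lower, upper], axis=1)

# Example: n = 5 calibration items, m = 2 new items, n_total = 7.
sets = rank_prediction_sets([1, 2, 4, 3, 6], [1, 2, 3, 5, 6], [4, 7], 7)
print(sets)  # one [lower, upper] rank interval per new item
```

In the paper's transductive setting, `cal_ranks_true` is unknown (it shifts with the new items), so the quantile step above is replaced by valid upper bounds on the scores derived from the distribution of conformal p-values.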
Cite
Text
Fermanian et al. "Transductive Conformal Inference for Full Ranking." Advances in Neural Information Processing Systems, 2025.

Markdown
[Fermanian et al. "Transductive Conformal Inference for Full Ranking." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/fermanian2025neurips-transductive/)

BibTeX
@inproceedings{fermanian2025neurips-transductive,
title = {{Transductive Conformal Inference for Full Ranking}},
author = {Fermanian, Jean-Baptiste and Humbert, Pierre and Blanchard, Gilles},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/fermanian2025neurips-transductive/}
}