Active Ranking Without Strong Stochastic Transitivity

Abstract

Ranking from noisy comparisons is of great practical interest in machine learning. In this paper, we consider the problem of recovering the exact full ranking for a list of items under ranking models that do *not* assume the Strong Stochastic Transitivity property. We propose a $\delta$-correct algorithm, Probe-Rank, that actively learns the ranking of the items from noisy pairwise comparisons. We prove a sample complexity upper bound for Probe-Rank that depends only on the preference probabilities between items that are adjacent in the true ranking. This improves upon existing sample complexity results that depend on the preference probabilities for all pairs of items. Probe-Rank thus outperforms existing methods over a large collection of instances that do not satisfy Strong Stochastic Transitivity. We conduct thorough numerical experiments in various settings, demonstrating that Probe-Rank is significantly more sample-efficient than the state-of-the-art active ranking method.
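To make the setting concrete, here is a minimal sketch of ranking from noisy pairwise comparisons. This is *not* the paper's Probe-Rank algorithm: it is a generic δ-correct-style insertion sort in which each pairwise comparison is resolved by repeated queries to a noisy oracle until a Hoeffding-style anytime confidence interval around the empirical win rate excludes 1/2. The oracle `prefs`, the comparator, and all parameter choices are illustrative assumptions.

```python
import math
import random


def noisy_compare(i, j, prefs, rng):
    # Hypothetical oracle: prefs[(i, j)] is the probability that i beats j.
    return rng.random() < prefs[(i, j)]


def confident_compare(i, j, prefs, rng, delta=0.05, max_samples=100_000):
    """Query the noisy oracle repeatedly until a Hoeffding-style anytime
    confidence radius separates the empirical win rate from 1/2."""
    wins = 0
    for t in range(1, max_samples + 1):
        wins += noisy_compare(i, j, prefs, rng)
        mean = wins / t
        # Anytime radius with a crude union bound over rounds.
        radius = math.sqrt(math.log(4 * t * t / delta) / (2 * t))
        if abs(mean - 0.5) > radius:
            return mean > 0.5
    return wins / max_samples > 0.5  # fall back to a plurality vote


def rank(items, prefs, rng, delta=0.05):
    """Insertion sort driven by the confident comparator: each item is
    binary-searched into its position in the partial ranking (best first)."""
    ranking = []
    for x in items:
        lo, hi = 0, len(ranking)
        while lo < hi:
            mid = (lo + hi) // 2
            if confident_compare(x, ranking[mid], prefs, rng, delta):
                hi = mid  # x beats ranking[mid], so x goes above it
            else:
                lo = mid + 1
        ranking.insert(lo, x)
    return ranking
```

Note that this sketch spends samples on arbitrary pairs chosen by the sort, whereas the point of Probe-Rank is that its sample complexity is governed only by the preference probabilities of pairs adjacent in the true ranking.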

Cite

Text

Lou et al. "Active Ranking Without Strong Stochastic Transitivity." Neural Information Processing Systems, 2022.

Markdown

[Lou et al. "Active Ranking Without Strong Stochastic Transitivity." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/lou2022neurips-active/)

BibTeX

@inproceedings{lou2022neurips-active,
  title     = {{Active Ranking Without Strong Stochastic Transitivity}},
  author    = {Lou, Hao and Jin, Tao and Wu, Yue and Xu, Pan and Gu, Quanquan and Farnoud, Farzad},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/lou2022neurips-active/}
}