On the Consistency of Ranking Algorithms

Abstract

We present a theoretical analysis of supervised ranking, providing necessary and sufficient conditions for the asymptotic consistency of algorithms based on minimizing a surrogate loss function. We show that many commonly used surrogate losses are inconsistent; surprisingly, this inconsistency persists even in low-noise settings. We present a new value-regularized linear loss, establish its consistency under reasonable assumptions on noise, and show that it outperforms conventional ranking losses in a collaborative filtering experiment.
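To make the surrogate-loss setup concrete, here is a minimal sketch of minimizing one common surrogate for pairwise ranking: the logistic loss log(1 + exp(-(s_i - s_j))) over preferred pairs (i, j), with linear scores s = Xw. This is a generic illustration of the kind of surrogate-minimizing algorithm the paper analyzes, not the paper's value-regularized loss; the data and step size are made up.

```python
import numpy as np

def pairwise_logistic_loss(w, X, pairs):
    """Mean logistic surrogate loss over preferred pairs (i, j),
    where item i should be ranked above item j."""
    s = X @ w
    diffs = np.array([s[i] - s[j] for i, j in pairs])
    # log(1 + exp(-diff)), computed stably
    return np.mean(np.logaddexp(0.0, -diffs))

def loss_gradient(w, X, pairs):
    """Gradient of the mean pairwise logistic loss w.r.t. w."""
    s = X @ w
    g = np.zeros_like(w)
    for i, j in pairs:
        # d/d(diff) log(1 + exp(-diff)) = -1 / (1 + exp(diff))
        sig = 1.0 / (1.0 + np.exp(s[i] - s[j]))
        g += -sig * (X[i] - X[j])
    return g / len(pairs)

# Tiny synthetic example: 6 items with 3 features each,
# and three observed preferences (i preferred over j).
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
pairs = [(0, 1), (2, 3), (4, 5)]

w = np.zeros(3)
losses = [pairwise_logistic_loss(w, X, pairs)]
for _ in range(200):
    w -= 0.5 * loss_gradient(w, X, pairs)  # plain gradient descent
    losses.append(pairwise_logistic_loss(w, X, pairs))
```

Consistency asks whether driving such a surrogate loss to its minimum also drives the true ranking risk to its minimum as the sample grows; the paper's negative results show this can fail for losses of exactly this pairwise form.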

Cite

Text

Duchi et al. "On the Consistency of Ranking Algorithms." International Conference on Machine Learning, 2010.

Markdown

[Duchi et al. "On the Consistency of Ranking Algorithms." International Conference on Machine Learning, 2010.](https://mlanthology.org/icml/2010/duchi2010icml-consistency/)

BibTeX

@inproceedings{duchi2010icml-consistency,
  title     = {{On the Consistency of Ranking Algorithms}},
  author    = {Duchi, John C. and Mackey, Lester W. and Jordan, Michael I.},
  booktitle = {International Conference on Machine Learning},
  year      = {2010},
  pages     = {327--334},
  url       = {https://mlanthology.org/icml/2010/duchi2010icml-consistency/}
}