A Learning Theory of Ranking Aggregation

Abstract

Originally formulated in Social Choice theory, Ranking Aggregation, also referred to as Consensus Ranking, has motivated the development of numerous statistical models since the middle of the 20th century. Recently, the analysis of ranking/preference data has attracted renewed interest in machine learning, boosted by modern applications such as meta-search engines. This has given rise to various scalable algorithmic approaches for approximately computing ranking medians, viewed as solutions of a discrete (generally NP-hard) minimization problem. This paper develops a statistical learning theory for ranking aggregation in a general probabilistic setting (avoiding any rigid ranking model assumptions), assessing the generalization ability of empirical ranking medians. Universal rate bounds are established and the situations where convergence occurs at an exponential rate are fully characterized. Minimax lower bounds are also proved, showing that the rate bounds we obtain are optimal.
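To make the minimization problem behind empirical ranking medians concrete, here is a minimal Python sketch (not the authors' code) that computes an empirical Kemeny median of a small sample of rankings by brute force under the Kendall tau distance. The function names and the toy sample are illustrative assumptions; brute force is only viable for a handful of items, which is precisely why the problem is NP-hard in general and scalable approximations are studied.

```python
from itertools import permutations

def kendall_tau(sigma, tau):
    """Number of pairwise disagreements between two rankings
    (each a tuple listing items from most to least preferred)."""
    pos_s = {item: i for i, item in enumerate(sigma)}
    pos_t = {item: i for i, item in enumerate(tau)}
    items = list(sigma)
    n = len(items)
    return sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if (pos_s[items[i]] - pos_s[items[j]]) * (pos_t[items[i]] - pos_t[items[j]]) < 0
    )

def empirical_kemeny_median(sample):
    """Empirical median: the permutation minimizing the total
    Kendall tau distance to the observed rankings (brute force)."""
    items = sample[0]
    return min(
        permutations(items),
        key=lambda candidate: sum(kendall_tau(candidate, r) for r in sample),
    )

# Toy sample of rankings over three items (hypothetical data).
sample = [("a", "b", "c"), ("a", "c", "b"), ("b", "a", "c"), ("a", "b", "c")]
print(empirical_kemeny_median(sample))  # -> ('a', 'b', 'c')
```

The paper's generalization bounds concern how well such an empirical median, computed from an i.i.d. sample of rankings, approximates a median of the underlying distribution.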

Cite

Text

Korba et al. "A Learning Theory of Ranking Aggregation." International Conference on Artificial Intelligence and Statistics, 2017.

Markdown

[Korba et al. "A Learning Theory of Ranking Aggregation." International Conference on Artificial Intelligence and Statistics, 2017.](https://mlanthology.org/aistats/2017/korba2017aistats-learning/)

BibTeX

@inproceedings{korba2017aistats-learning,
  title     = {{A Learning Theory of Ranking Aggregation}},
  author    = {Korba, Anna and Clémençon, Stéphan and Sibony, Eric},
  booktitle = {International Conference on Artificial Intelligence and Statistics},
  year      = {2017},
  pages     = {1001-1010},
  url       = {https://mlanthology.org/aistats/2017/korba2017aistats-learning/}
}