The Limits of Maxing, Ranking, and Preference Learning
Abstract
We present a comprehensive understanding of three important problems in PAC preference learning in the adaptive setting: maximum selection (maxing), ranking, and estimating all pairwise preference probabilities. With just Weak Stochastic Transitivity, we show that maxing requires $\Omega(n^2)$ comparisons; under the slightly more restrictive Medium Stochastic Transitivity, we present a linear-complexity maxing algorithm. With Strong Stochastic Transitivity and Stochastic Triangle Inequality, we derive a ranking algorithm with optimal $\mathcal{O}(n\log n)$ complexity and an optimal algorithm that estimates all pairwise preference probabilities.
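To make the setting concrete, the following is a minimal illustrative sketch of PAC maximum selection from noisy pairwise comparisons. It is not the paper's algorithm: it is a simple sequential-elimination baseline that repeatedly duels a running champion against each challenger, with the per-duel comparison count set by a Hoeffding bound. The function names (`compare`, `pac_max`) and the preference matrix `p` are hypothetical and introduced only for illustration; correctness of this baseline implicitly relies on a transitivity assumption such as Strong Stochastic Transitivity.

```python
import math
import random

def compare(i, j, p):
    """One noisy comparison: True if item i beats item j,
    which happens with probability p[i][j]."""
    return random.random() < p[i][j]

def pac_max(items, p, eps=0.05, delta=0.05):
    """Illustrative sequential-elimination maxing sketch (not the
    paper's algorithm). Keeps a running champion and challenges it
    with each remaining item using repeated comparisons; the number
    of comparisons per duel comes from a Hoeffding bound so that,
    under Strong Stochastic Transitivity, the final champion is an
    eps-maximum with probability at least 1 - delta."""
    champ = items[0]
    # Comparisons per duel: enough to resolve a gap of eps
    # with per-duel failure probability delta / n.
    m = math.ceil((1.0 / (2 * eps**2)) * math.log(2 * len(items) / delta))
    for challenger in items[1:]:
        wins = sum(compare(challenger, champ, p) for _ in range(m))
        if wins / m > 0.5:
            champ = challenger  # challenger wins the duel
    return champ
```

With a clear gap between items (say the probability that a higher-indexed item beats a lower-indexed one is 0.75), this baseline reliably returns the best item, but it spends $\mathcal{O}(n \log n)$ comparisons even in easy instances; the paper's results characterize when and how this cost can be reduced or must grow to $\Omega(n^2)$.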
Cite
Text
Falahatgar et al. "The Limits of Maxing, Ranking, and Preference Learning." International Conference on Machine Learning, 2018.
Markdown
[Falahatgar et al. "The Limits of Maxing, Ranking, and Preference Learning." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/falahatgar2018icml-limits/)
BibTeX
@inproceedings{falahatgar2018icml-limits,
title = {{The Limits of Maxing, Ranking, and Preference Learning}},
author = {Falahatgar, Moein and Jain, Ayush and Orlitsky, Alon and Pichapati, Venkatadheeraj and Ravindrakumar, Vaishakh},
booktitle = {International Conference on Machine Learning},
year = {2018},
pages = {1427--1436},
volume = {80},
url = {https://mlanthology.org/icml/2018/falahatgar2018icml-limits/}
}