Ranking and Scoring Using Empirical Risk Minimization
Abstract
A general model is proposed for studying ranking problems. We investigate learning methods based on empirical minimization of the natural estimates of the ranking risk. The empirical estimates take the form of a U-statistic. Inequalities from the theory of U-statistics and U-processes are used to obtain performance bounds for the empirical risk minimizers. Convex risk minimization methods are also studied to give a theoretical framework for ranking algorithms based on boosting and support vector machines. As in binary classification, fast rates of convergence are achieved under certain noise assumptions. General sufficient conditions guaranteeing fast rates of convergence are proposed in several special cases.
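The abstract's "natural estimate of the ranking risk" can be illustrated concretely. In this framework the ranking risk of a rule r is the probability that r misorders a random pair, and its empirical counterpart averages a pairwise error indicator over all ordered pairs of distinct sample points, which is exactly a U-statistic of order two. The sketch below is illustrative only (the function names and the toy data are not from the paper); it assumes a ranking rule induced by a real-valued scoring function, r(x, x') = sign(s(x) - s(x')).

```python
import numpy as np

def empirical_ranking_risk(r, X, y):
    """U-statistic estimate of the ranking risk: the fraction of
    ordered pairs (i, j), i != j, that the ranking rule r misorders
    relative to the sign of y_i - y_j."""
    n = len(y)
    errors = 0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # the pair is misranked when r disagrees with the label order
            if (y[i] - y[j]) * r(X[i], X[j]) < 0:
                errors += 1
    return errors / (n * (n - 1))

# Hypothetical scoring-based rule: rank by the first coordinate.
score = lambda x: x[0]
r = lambda x, xp: np.sign(score(x) - score(xp))

# Toy data for illustration (not from the paper).
X = np.array([[0.1], [0.5], [0.9]])
y = np.array([0, 1, 1])
risk = empirical_ranking_risk(r, X, y)  # 0.0: every labeled pair is ordered correctly
```

Empirical risk minimization, as studied in the paper, would then select the rule minimizing this quantity over a class of candidate rules; the U-statistic structure is what allows the concentration inequalities used in the performance bounds.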
Cite
Text
Clémençon et al. "Ranking and Scoring Using Empirical Risk Minimization." Annual Conference on Computational Learning Theory, 2005. doi:10.1007/11503415_1
Markdown
[Clémençon et al. "Ranking and Scoring Using Empirical Risk Minimization." Annual Conference on Computational Learning Theory, 2005.](https://mlanthology.org/colt/2005/clemencon2005colt-ranking/) doi:10.1007/11503415_1
BibTeX
@inproceedings{clemencon2005colt-ranking,
title = {{Ranking and Scoring Using Empirical Risk Minimization}},
author = {Clémençon, Stéphan and Lugosi, Gábor and Vayatis, Nicolas},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2005},
pages = {1--15},
doi = {10.1007/11503415_1},
url = {https://mlanthology.org/colt/2005/clemencon2005colt-ranking/}
}