Generalization Error Bounds for Learning to Rank: Does the Length of Document Lists Matter?
Abstract
We consider the generalization ability of algorithms for learning to rank at a query level, a problem also called subset ranking. Existing generalization error bounds necessarily degrade as the size of the document list associated with a query increases. We show that such a degradation is not intrinsic to the problem. For several loss functions, including the cross-entropy loss used in the well-known ListNet method, there is no degradation in generalization ability as document lists become longer. We also provide novel generalization error bounds under \ell_1 regularization, and faster convergence rates if the loss function is smooth.
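For context, the cross-entropy loss used in ListNet can be sketched as follows under the standard top-one probability model; the notation (score vector s, relevance vector y, list length m) is illustrative and not taken from this abstract:

```latex
% Top-one probability induced by a vector v over m documents,
% and the ListNet cross-entropy loss comparing the distributions
% induced by the relevance vector y and the score vector s.
\[
P_v(j) = \frac{\exp(v_j)}{\sum_{k=1}^{m} \exp(v_k)}, \qquad
\ell(s, y) = -\sum_{j=1}^{m} P_y(j)\,\log P_s(j)
\]
```

The paper's claim concerns how bounds for losses such as this one behave as the list length m grows.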
Cite
Tewari and Chaudhuri. "Generalization Error Bounds for Learning to Rank: Does the Length of Document Lists Matter?" International Conference on Machine Learning, 2015.
BibTeX
@inproceedings{tewari2015icml-generalization,
title = {{Generalization Error Bounds for Learning to Rank: Does the Length of Document Lists Matter?}},
author = {Tewari, Ambuj and Chaudhuri, Sougata},
booktitle = {International Conference on Machine Learning},
year = {2015},
pages = {315--323},
volume = {37},
url = {https://mlanthology.org/icml/2015/tewari2015icml-generalization/}
}