Ranking with Large Margin Principle: Two Approaches
Abstract
We discuss the problem of ranking k instances with the use of a "large margin" principle. We introduce two main approaches: the first is the "fixed margin" policy, in which the margin of the closest neighboring classes is maximized, which turns out to be a direct generalization of SVM to ranking learning. The second approach allows for k - 1 different margins, where the sum of margins is maximized. This approach is shown to reduce to ν-SVM when the number of classes k = 2. Both approaches are optimal in size 2l, where l is the total number of training examples. Experiments performed on visual classification and "collaborative filtering" show that both approaches outperform existing ordinal regression algorithms applied for ranking and multi-class SVM applied to general multi-class classification.
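Both approaches learn the same kind of predictor: a single projection direction together with k - 1 ordered thresholds that partition the real line into k rank intervals; an instance is assigned the rank of the interval its projected score falls into. The following Python sketch illustrates that thresholded-projection prediction rule with toy, made-up weights and thresholds (not values from the paper):

```python
import numpy as np

def predict_rank(w, thresholds, x):
    """Assign a 0-based rank to x: project onto w, then count how many
    of the sorted thresholds the score exceeds (b_0 = -inf, b_k = +inf)."""
    score = np.dot(w, x)
    return int(np.sum(score > np.asarray(thresholds)))

# Toy example with k = 3 ranks, hence k - 1 = 2 thresholds.
w = np.array([1.0, -0.5])
thresholds = [0.0, 2.0]

print(predict_rank(w, thresholds, np.array([-1.0, 0.0])))  # score -1.0 -> rank 0
print(predict_rank(w, thresholds, np.array([1.0, 0.0])))   # score  1.0 -> rank 1
print(predict_rank(w, thresholds, np.array([3.0, 0.0])))   # score  3.0 -> rank 2
```

The two approaches differ only in how training chooses w and the thresholds: the fixed-margin policy maximizes a single margin at the tightest pair of adjacent classes, while the sum-of-margins policy maximizes the sum of all k - 1 inter-class margins.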
Cite
Shashua and Levin. "Ranking with Large Margin Principle: Two Approaches." Neural Information Processing Systems, 2002.
@inproceedings{shashua2002neurips-ranking,
title = {{Ranking with Large Margin Principle: Two Approaches}},
author = {Shashua, Amnon and Levin, Anat},
booktitle = {Neural Information Processing Systems},
year = {2002},
pages = {961--968},
url = {https://mlanthology.org/neurips/2002/shashua2002neurips-ranking/}
}