Ranking with a P-Norm Push

Abstract

We are interested in supervised ranking with the following twist: our goal is to design algorithms that perform especially well near the top of the ranked list, and are only required to perform sufficiently well on the rest of the list. Towards this goal, we provide a general form of convex objective that gives high-scoring examples more importance. This “push” near the top of the list can be chosen to be arbitrarily large or small. We choose ℓ_p-norms to provide a specific type of push; as p becomes large, the algorithm concentrates harder near the top of the list. We derive a generalization bound based on the p-norm objective. We then derive a corresponding boosting-style algorithm, and illustrate the usefulness of the algorithm through experiments on UCI data. We prove that the minimizer of the objective is unique in a specific sense.
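The abstract does not spell out the objective, but the idea it describes, aggregating each negative example's pairwise losses and then applying an ℓ_p-norm so that high-scoring negatives dominate as p grows, can be sketched as follows. This is a minimal illustration under assumed choices (exponential pairwise loss, NumPy arrays of scores), not the paper's exact algorithm; the function name `pnorm_push_objective` is hypothetical.

```python
import numpy as np

def pnorm_push_objective(scores_pos, scores_neg, p=4):
    """Sketch of a p-norm 'push' ranking objective (assumed form).

    scores_pos, scores_neg: 1-D arrays of ranking scores for positive
    and negative examples. Exponential pairwise loss is one common
    surrogate; the paper's general form allows other convex choices.
    """
    # Pairwise loss: large when a negative example outscores a positive.
    pair_loss = np.exp(scores_neg[None, :] - scores_pos[:, None])
    # Per-negative "height": total loss of that negative against all positives.
    per_negative = pair_loss.sum(axis=0)
    # p-th-power aggregation over negatives: as p grows, the
    # highest-scoring (most troublesome) negatives dominate the sum,
    # concentrating the objective near the top of the ranked list.
    return (per_negative ** p).sum()

# Toy example: one negative (score 1.8) sits near the top of the list.
pos = np.array([2.0, 1.5, 1.0])
neg = np.array([0.5, 0.0, 1.8])
small_push = pnorm_push_objective(pos, neg, p=1)
large_push = pnorm_push_objective(pos, neg, p=8)
```

With p = 1 every misranked pair counts equally (an AUC-style objective); raising p makes the contribution of the worst negative overwhelm the rest, which is the "push" near the top.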

Cite

Text

Rudin. "Ranking with a P-Norm Push." Annual Conference on Computational Learning Theory, 2006. doi:10.1007/11776420_43

Markdown

[Rudin. "Ranking with a P-Norm Push." Annual Conference on Computational Learning Theory, 2006.](https://mlanthology.org/colt/2006/rudin2006colt-ranking/) doi:10.1007/11776420_43

BibTeX

@inproceedings{rudin2006colt-ranking,
  title     = {{Ranking with a P-Norm Push}},
  author    = {Rudin, Cynthia},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {2006},
  pages     = {589--604},
  doi       = {10.1007/11776420_43},
  url       = {https://mlanthology.org/colt/2006/rudin2006colt-ranking/}
}