Which Tricks Are Important for Learning to Rank?

Abstract

Nowadays, state-of-the-art learning-to-rank methods are based on gradient-boosted decision trees (GBDT). The most well-known algorithm is LambdaMART, which was proposed more than a decade ago. Recently, several other GBDT-based ranking algorithms have been proposed. In this paper, we thoroughly analyze these methods in a unified setup. In particular, we address the following questions. Is direct optimization of a smoothed ranking loss preferable to optimizing a convex surrogate? How should surrogate ranking losses be constructed and smoothed? To address these questions, we compare LambdaMART with the YetiRank and StochasticRank methods and their modifications. We also propose a simple improvement of the YetiRank approach that allows for optimizing specific ranking loss functions. As a result, we gain insights into learning-to-rank techniques and obtain a new state-of-the-art algorithm.
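For context on the baseline the paper compares against, below is a minimal sketch (not from the paper) of the standard LambdaMART pseudo-gradient ("lambda") computation for a single query: each pairwise logistic gradient is weighted by the NDCG change from swapping the pair. The function names, the sigma parameter, and the sign convention (positive lambda pushes a document up) are illustrative assumptions; implementations differ in details.

import numpy as np

def delta_ndcg(y, ranks, i, j, idcg):
    # |NDCG change| from swapping documents i and j in the current ranking
    gain = lambda rel: 2.0 ** rel - 1.0
    disc = lambda pos: 1.0 / np.log2(pos + 2.0)  # positions are 0-indexed
    return abs((gain(y[i]) - gain(y[j])) * (disc(ranks[i]) - disc(ranks[j]))) / idcg

def lambda_gradients(scores, y, sigma=1.0):
    # LambdaMART-style pseudo-gradients for one query (illustrative sketch)
    order = np.argsort(-scores)            # current ranking by model score
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(scores))  # position of each document
    ideal = np.sort(y)[::-1]
    idcg = np.sum((2.0 ** ideal - 1.0) / np.log2(np.arange(len(y)) + 2.0))
    lambdas = np.zeros(len(scores))
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] > y[j]:                # i should be ranked above j
                rho = 1.0 / (1.0 + np.exp(sigma * (scores[i] - scores[j])))
                lam = sigma * rho * delta_ndcg(y, ranks, i, j, idcg)
                lambdas[i] += lam          # push the more relevant doc up
                lambdas[j] -= lam          # push the less relevant doc down
    return lambdas

scores = np.array([0.2, 1.5, 0.3])  # current model scores for one query
labels = np.array([2, 0, 1])        # graded relevance labels
print(lambda_gradients(scores, labels))

The lambdas then serve as targets for the next boosted tree; the methods the paper studies differ mainly in how these per-pair weights are constructed and smoothed.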

Cite

Text

Lyzhin et al. "Which Tricks Are Important for Learning to Rank?" International Conference on Machine Learning, 2023.

Markdown

[Lyzhin et al. "Which Tricks Are Important for Learning to Rank?" International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/lyzhin2023icml-tricks/)

BibTeX

@inproceedings{lyzhin2023icml-tricks,
  title     = {{Which Tricks Are Important for Learning to Rank?}},
  author    = {Lyzhin, Ivan and Ustimenko, Aleksei and Gulin, Andrey and Prokhorenkova, Liudmila},
  booktitle = {International Conference on Machine Learning},
  year      = {2023},
  pages     = {23264--23278},
  volume    = {202},
  url       = {https://mlanthology.org/icml/2023/lyzhin2023icml-tricks/}
}