StochasticRank: Global Optimization of Scale-Free Discrete Functions
Abstract
In this paper, we introduce a powerful and efficient framework for the direct optimization of ranking metrics. The problem is ill-posed due to the discrete structure of the loss; to deal with this, we introduce two key techniques: stochastic smoothing and a novel gradient estimate based on partial integration. We show that classic smoothing approaches may introduce bias and present a universal solution for proper debiasing. Importantly, we can guarantee the global convergence of our method by adopting the recently proposed Stochastic Gradient Langevin Boosting algorithm. Our algorithm is implemented as part of the CatBoost gradient boosting library and outperforms existing approaches on several learning-to-rank datasets. In addition to ranking metrics, our framework applies to any scale-free discrete loss function.
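Since the abstract compresses the method into one sentence, here is what stochastic smoothing means concretely: a ranking metric such as DCG is a piecewise-constant function of the model scores (it depends on the scores only through the induced ordering), so its gradient is zero almost everywhere; averaging the metric over random perturbations of the scores yields a smooth surrogate. The NumPy sketch below illustrates only this smoothing step, under assumed names (dcg_at_k, smoothed_dcg), Gaussian noise, and plain Monte Carlo averaging; it is not the paper's debiased gradient estimator or the CatBoost implementation.

import numpy as np

def dcg_at_k(relevance, scores, k=10):
    # Discrete DCG@k: depends on the scores only through the induced
    # ordering, so it is piecewise constant in the scores.
    order = np.argsort(-scores)
    gains = 2.0 ** relevance[order][:k] - 1.0
    discounts = 1.0 / np.log2(np.arange(2, k + 2))
    return float(np.sum(gains * discounts[:len(gains)]))

def smoothed_dcg(relevance, scores, sigma=0.1, n_samples=256, seed=0):
    # Stochastic smoothing: estimate E[DCG(scores + sigma * eps)] with
    # eps ~ N(0, I) by Monte Carlo. The expectation is a smooth function
    # of the scores even though DCG itself is discrete.
    rng = np.random.default_rng(seed)
    samples = [
        dcg_at_k(relevance, scores + sigma * rng.standard_normal(scores.shape))
        for _ in range(n_samples)
    ]
    return float(np.mean(samples))

rel = np.array([3.0, 2.0, 0.0, 1.0])
s = np.array([0.5, 0.4, 0.3, 0.2])
print(dcg_at_k(rel, s), smoothed_dcg(rel, s))

To train a model with this objective, the abstract points to CatBoost, where the method is exposed as the StochasticRank loss (a parameter string along the lines of loss_function='StochasticRank:metric=NDCG'; the exact spelling should be checked against the current CatBoost documentation).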
Cite
Text
Ustimenko and Prokhorenkova. "StochasticRank: Global Optimization of Scale-Free Discrete Functions." International Conference on Machine Learning, 2020.

Markdown

[Ustimenko and Prokhorenkova. "StochasticRank: Global Optimization of Scale-Free Discrete Functions." International Conference on Machine Learning, 2020.](https://mlanthology.org/icml/2020/ustimenko2020icml-stochasticrank/)

BibTeX
@inproceedings{ustimenko2020icml-stochasticrank,
title = {{StochasticRank: Global Optimization of Scale-Free Discrete Functions}},
author = {Ustimenko, Aleksei and Prokhorenkova, Liudmila},
booktitle = {International Conference on Machine Learning},
year = {2020},
pages = {9669-9679},
volume = {119},
url = {https://mlanthology.org/icml/2020/ustimenko2020icml-stochasticrank/}
}