Pretrained Deep Models Outperform GBDTs in Learning-to-Rank Under Label Scarcity
Abstract
On tabular data, a significant body of literature has shown that current deep learning (DL) models perform at best similarly to Gradient Boosted Decision Trees (GBDTs), while significantly underperforming them on outlier data. However, these works often study problem settings which may not fully capture the complexities of real-world scenarios. We identify a natural tabular data setting where DL models can outperform GBDTs: tabular Learning-to-Rank (LTR) under label scarcity. Tabular LTR applications, including search and recommendation, often have an abundance of unlabeled data but scarce labeled data. We show that DL rankers can utilize unsupervised pretraining to exploit this unlabeled data. In extensive experiments over both public and proprietary datasets, we show that pretrained DL rankers consistently outperform GBDT rankers on ranking metrics, sometimes by as much as 38%, both overall and on outliers.
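The abstract describes a two-stage recipe: unsupervised pretraining of a deep ranker on abundant unlabeled query-document feature vectors, followed by fine-tuning on the scarce labeled data. The sketch below is a minimal, hypothetical PyTorch illustration of that recipe, not the authors' exact method: it assumes masked-feature reconstruction as the self-supervised objective and a RankNet-style pairwise loss for fine-tuning, and all names, sizes, and data are placeholders.

```python
# Hypothetical sketch of "pretrain on unlabeled data, fine-tune on scarce labels"
# for tabular LTR. Objectives, architecture, and hyperparameters are assumptions,
# not the paper's exact method.
import torch
import torch.nn as nn

NUM_FEATURES = 136  # e.g., a LETOR-style tabular feature count (assumed)

class TabularEncoder(nn.Module):
    def __init__(self, d_in=NUM_FEATURES, d_hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

encoder = TabularEncoder()
decoder = nn.Linear(256, NUM_FEATURES)  # reconstruction head, used only for pretraining
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

# --- Stage 1: unsupervised pretraining on abundant unlabeled feature vectors ---
unlabeled = torch.randn(10_000, NUM_FEATURES)  # placeholder for the unlabeled pool
for _ in range(5):
    idx = torch.randint(0, unlabeled.size(0), (512,))
    x = unlabeled[idx]
    mask = (torch.rand_like(x) < 0.3).float()   # randomly mask 30% of the features
    recon = decoder(encoder(x * (1 - mask)))
    loss = ((recon - x) ** 2 * mask).sum() / mask.sum()  # reconstruct masked entries
    opt.zero_grad()
    loss.backward()
    opt.step()

# --- Stage 2: fine-tune a ranking head on scarce labeled data (pairwise loss) ---
score_head = nn.Linear(256, 1)
ft_opt = torch.optim.Adam(list(encoder.parameters()) + list(score_head.parameters()), lr=1e-4)

# Toy labeled query: a handful of documents with graded relevance labels.
docs = torch.randn(8, NUM_FEATURES)
labels = torch.tensor([2., 1., 0., 0., 1., 0., 2., 0.])
for _ in range(5):
    scores = score_head(encoder(docs)).squeeze(-1)
    i, j = torch.meshgrid(torch.arange(8), torch.arange(8), indexing="ij")
    preferred = labels[i] > labels[j]           # pairs where doc i is more relevant than doc j
    pair_loss = nn.functional.softplus(-(scores[i] - scores[j]))[preferred].mean()
    ft_opt.zero_grad()
    pair_loss.backward()
    ft_opt.step()
```

The point of the sketch is only to show where the unlabeled data enters the pipeline: the encoder is shaped by the self-supervised stage before any relevance labels are used, so the scarce labels only need to fit a small ranking head plus a light refinement of the encoder.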
Cite
Text
Hou et al. "Pretrained Deep Models Outperform GBDTs in Learning-to-Rank Under Label Scarcity." Transactions on Machine Learning Research, 2024.
Markdown
[Hou et al. "Pretrained Deep Models Outperform GBDTs in Learning-to-Rank Under Label Scarcity." Transactions on Machine Learning Research, 2024.](https://mlanthology.org/tmlr/2024/hou2024tmlr-pretrained/)
BibTeX
@article{hou2024tmlr-pretrained,
  title = {{Pretrained Deep Models Outperform GBDTs in Learning-to-Rank Under Label Scarcity}},
  author = {Hou, Charlie and Thekumparampil, Kiran Koshy and Shavlovsky, Michael and Fanti, Giulia and Dattatreya, Yesh and Sanghavi, Sujay},
  journal = {Transactions on Machine Learning Research},
  year = {2024},
  url = {https://mlanthology.org/tmlr/2024/hou2024tmlr-pretrained/}
}