Meta–Gradient Boosted Decision Tree Model for Weight and Target Learning

Abstract

Labeled training data is an essential part of any supervised machine learning framework. In practice, there is a trade-off between the quality of a label and its cost. In this paper, we consider the problem of learning to rank on a large-scale dataset with low-quality relevance labels, aiming to maximize the quality of the trained ranker as measured on a small validation dataset with high-quality ground-truth relevance labels. Motivated by the classical Gauss–Markov theorem for the linear regression problem, we formulate the problems of (1) reweighting training instances and (2) remapping learning targets. We propose a meta–gradient decision tree learning framework that optimizes the weight and target functions via gradient-based hyperparameter optimization. Experiments on a large-scale real-world dataset demonstrate that incorporating our framework significantly improves state-of-the-art machine-learning algorithms.
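The instance-reweighting idea from the abstract can be illustrated with a deliberately simple sketch. The snippet below is a hypothetical toy, not the authors' GBDT implementation: the inner learner is 1-D weighted least squares through the origin (echoing the Gauss–Markov motivation), so that both the fit and its derivative with respect to the instance weights have closed forms, and the weights are then updated by gradient descent on the loss over a small clean validation set. All function names, data, and the learning rate are illustrative assumptions.

```python
# Toy sketch (not the authors' code) of meta-gradient instance
# reweighting: adjust per-instance training weights by gradient
# descent on the loss of a small, clean validation set.

def fit_theta(xs, ys, ws):
    """Inner learner: weighted least-squares slope through the origin."""
    num = sum(w * x * y for x, y, w in zip(xs, ys, ws))
    den = sum(w * x * x for x, w in zip(xs, ws))
    return num / den

def val_loss(theta, xv, yv):
    """Outer objective: squared error on clean validation labels."""
    return sum((theta * x - y) ** 2 for x, y in zip(xv, yv)) / len(xv)

def meta_gradient_step(xs, ys, ws, xv, yv, lr=0.05):
    """One hyper-gradient step on the weights via the chain rule:
    dL/dw_i = (dL/dtheta) * (dtheta/dw_i), both in closed form here."""
    theta = fit_theta(xs, ys, ws)
    dL_dtheta = 2.0 * sum(x * (theta * x - y) for x, y in zip(xv, yv)) / len(xv)
    den = sum(w * x * x for x, w in zip(xs, ws))
    new_ws = []
    for x, y, w in zip(xs, ys, ws):
        # Derivative of the closed-form fit w.r.t. this instance's weight.
        dtheta_dw = (x * y - theta * x * x) / den
        new_ws.append(max(w - lr * dL_dtheta * dtheta_dw, 1e-6))
    return new_ws

xs = [1.0, 2.0, 3.0, 4.0]          # training inputs; true slope is 2.0
ys = [2.0, 4.1, 9.0, 8.2]          # noisy labels: the third is badly corrupted
xv, yv = [1.5, 2.5], [3.0, 5.0]    # small clean validation set

ws = [1.0] * len(xs)
for _ in range(200):
    ws = meta_gradient_step(xs, ys, ws, xv, yv)
```

After these updates, the corrupted instance's weight shrinks toward zero and the refit slope moves from about 2.33 toward the true value 2.0, which is the qualitative behavior the paper's framework seeks on real ranking data.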

Cite

Text

Ustinovskiy et al. "Meta–Gradient Boosted Decision Tree Model for Weight and Target Learning." International Conference on Machine Learning, 2016.

Markdown

[Ustinovskiy et al. "Meta–Gradient Boosted Decision Tree Model for Weight and Target Learning." International Conference on Machine Learning, 2016.](https://mlanthology.org/icml/2016/ustinovskiy2016icml-metagradient/)

BibTeX

@inproceedings{ustinovskiy2016icml-metagradient,
  title     = {{Meta–Gradient Boosted Decision Tree Model for Weight and Target Learning}},
  author    = {Ustinovskiy, Yury and Fedorova, Valentina and Gusev, Gleb and Serdyukov, Pavel},
  booktitle = {International Conference on Machine Learning},
  year      = {2016},
  pages     = {2692--2701},
  volume    = {48},
  url       = {https://mlanthology.org/icml/2016/ustinovskiy2016icml-metagradient/}
}