Boosted Multi-Task Learning

Abstract

In this paper we propose a novel algorithm for multi-task learning with boosted decision trees. We learn several different learning tasks with a joint model, explicitly addressing their commonalities through shared parameters and their differences with task-specific ones. This enables implicit data sharing and regularization. Our algorithm is derived using the relationship between ℓ1-regularization and boosting. We evaluate our learning method on web-search ranking data sets from several countries. Here, multi-task learning is particularly helpful, as data sets from different countries vary widely in size because of the cost of editorial judgments. Further, the proposed method obtains state-of-the-art results on a publicly available multi-task data set. Our experiments validate that learning various tasks jointly can lead to significant improvements in performance with surprising reliability.
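To make the idea of shared versus task-specific parameters concrete, here is a minimal, hypothetical sketch of multi-task gradient boosting with squared loss: each round fits one candidate tree per task on that task's residuals plus one "shared" tree on the pooled residuals of all tasks, then greedily keeps whichever candidate most reduces the total loss. The function name, the greedy selection rule, and all hyperparameters are illustrative assumptions, not the paper's exact algorithm.

```python
# Illustrative sketch only -- not the authors' exact method.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boosted_multitask(tasks, rounds=50, lr=0.1, depth=3):
    """tasks: list of (X, y) pairs, one per task.
    Returns a list of per-task training predictions."""
    preds = [np.zeros(len(y)) for _, y in tasks]
    X_all = np.vstack([X for X, _ in tasks])  # pooled inputs for the shared tree

    for _ in range(rounds):
        residuals = [y - p for (_, y), p in zip(tasks, preds)]
        candidates = []  # (total squared loss, updated predictions)

        # Shared candidate: one tree fit on the pooled residuals of all tasks.
        shared = DecisionTreeRegressor(max_depth=depth)
        shared.fit(X_all, np.concatenate(residuals))
        new_preds = [p + lr * shared.predict(X) for (X, _), p in zip(tasks, preds)]
        loss = sum(np.sum((y - q) ** 2) for (_, y), q in zip(tasks, new_preds))
        candidates.append((loss, new_preds))

        # Task-specific candidates: one tree fit on a single task's residuals.
        for t, ((X, _), r) in enumerate(zip(tasks, residuals)):
            tree = DecisionTreeRegressor(max_depth=depth)
            tree.fit(X, r)
            new_preds = [p.copy() for p in preds]
            new_preds[t] = preds[t] + lr * tree.predict(X)
            loss = sum(np.sum((y - q) ** 2) for (_, y), q in zip(tasks, new_preds))
            candidates.append((loss, new_preds))

        # Keep the single update (shared or task-specific) with the lowest loss.
        preds = min(candidates, key=lambda c: c[0])[1]
    return preds
```

Because the shared tree is trained on all tasks' residuals at once, small tasks implicitly borrow statistical strength from large ones, while the task-specific candidates can still capture per-task differences.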

Cite

Text

Chapelle et al. "Boosted Multi-Task Learning." Machine Learning, 2011. doi:10.1007/s10994-010-5231-6

Markdown

[Chapelle et al. "Boosted Multi-Task Learning." Machine Learning, 2011.](https://mlanthology.org/mlj/2011/chapelle2011mlj-boosted/) doi:10.1007/s10994-010-5231-6

BibTeX

@article{chapelle2011mlj-boosted,
  title     = {{Boosted Multi-Task Learning}},
  author    = {Chapelle, Olivier and Shivaswamy, Pannagadatta K. and Vadrevu, Srinivas and Weinberger, Kilian Q. and Zhang, Ya and Tseng, Belle L.},
  journal   = {Machine Learning},
  year      = {2011},
  pages     = {149--173},
  doi       = {10.1007/s10994-010-5231-6},
  volume    = {85},
  url       = {https://mlanthology.org/mlj/2011/chapelle2011mlj-boosted/}
}