Training-Time Optimization of a Budgeted Booster

Abstract

We consider the problem of feature-efficient prediction: a setting where features have costs and the learner is limited by a budget constraint on the total cost of the features it can examine at test time. We focus on solving this problem with boosting by optimizing the choice of base learners during training and stopping the boosting process when the learner's budget runs out. We show experimentally that our method improves upon the boosting approach AdaBoostRS [Reyzin, 2011] and in many cases also outperforms the recent algorithm SpeedBoost [Grubb and Bagnell, 2012]. We provide a theoretical justification for our optimization method via the margin bound. We also show experimentally that our method outperforms pruned decision trees, a natural budgeted classifier.
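The general setup the abstract describes can be illustrated with a minimal sketch: AdaBoost over single-feature decision stumps, where each previously unused feature charges its cost against the budget, unaffordable features are skipped when selecting the next base learner, and boosting stops once nothing affordable remains. This is an illustrative toy, not the paper's actual optimization; the function names, the stump search, and the cost-accounting details here are assumptions made for the example.

```python
import numpy as np

def budgeted_adaboost(X, y, costs, budget, max_rounds=100):
    """Toy budgeted boosting: AdaBoost with decision stumps, where each
    distinct feature is paid for once and stumps on unaffordable
    features are excluded from base-learner selection.
    (Illustrative sketch, not the paper's algorithm.)"""
    n, d = X.shape
    w = np.full(n, 1.0 / n)   # example weights
    used = set()              # features already paid for
    spent = 0.0
    ensemble = []             # (alpha, feature, threshold, polarity)
    for _ in range(max_rounds):
        best = None
        for j in range(d):
            # An unused feature would add its cost; skip if over budget.
            extra = 0.0 if j in used else costs[j]
            if spent + extra > budget:
                continue
            for t in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, pol)
        if best is None:
            break             # no affordable base learner remains
        err, j, t, pol = best
        err = max(err, 1e-12)
        if err >= 0.5:
            break             # no affordable stump beats random
        alpha = 0.5 * np.log((1 - err) / err)
        if j not in used:
            used.add(j)
            spent += costs[j]
        pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, j, t, pol))
    return ensemble, spent

def predict(ensemble, X):
    """Sign of the weighted vote of the selected stumps."""
    score = np.zeros(X.shape[0])
    for alpha, j, t, pol in ensemble:
        score += alpha * np.where(pol * (X[:, j] - t) >= 0, 1, -1)
    return np.where(score >= 0, 1, -1)
```

At test time the ensemble only ever reads the features it paid for during training, so the total feature cost of classifying any example is bounded by `spent <= budget`.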

Cite

Text

Huang et al. "Training-Time Optimization of a Budgeted Booster." International Joint Conference on Artificial Intelligence, 2015.

Markdown

[Huang et al. "Training-Time Optimization of a Budgeted Booster." International Joint Conference on Artificial Intelligence, 2015.](https://mlanthology.org/ijcai/2015/huang2015ijcai-training/)

BibTeX

@inproceedings{huang2015ijcai-training,
  title     = {{Training-Time Optimization of a Budgeted Booster}},
  author    = {Huang, Yi and Powers, Brian and Reyzin, Lev},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2015},
  pages     = {3583--3589},
  url       = {https://mlanthology.org/ijcai/2015/huang2015ijcai-training/}
}