New Algorithms for Budgeted Learning
Abstract
We explore the problem of budgeted machine learning, in which the learning algorithm has free access to the training examples’ class labels but must pay for each attribute value it requests. This learning model is appropriate in many areas, including medical applications. We present new algorithms for choosing which attributes to purchase of which examples, based on algorithms for the multi-armed bandit problem. In addition, we evaluate a group of algorithms based on the idea of incorporating second-order statistics into decision making. Most of our algorithms are competitive with the current state of the art and perform better when the budget is highly limited (in particular, our new algorithm AbsoluteBR2). Finally, we present new heuristics for selecting an instance to purchase after the attribute is selected, instead of selecting an instance uniformly at random, as is typically done. While experimental results showed some performance improvements when using the new instance selectors, there was no consistent winner among these methods.
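As a concrete illustration of the setting the abstract describes, the sketch below runs a budgeted purchase loop: an ε-greedy bandit treats each attribute as an arm and, once an attribute is chosen, an instance with that attribute still hidden is drawn uniformly at random. This is a minimal toy, not the paper's algorithm; the reward signal (agreement between the purchased value and the class label) and all names here are illustrative assumptions.

```python
import random

def budgeted_purchase(X, y, budget, epsilon=0.1, rng=None):
    """Toy budgeted-learning loop (illustrative, not the paper's method).

    An epsilon-greedy bandit picks which attribute to purchase next; the
    instance is then chosen uniformly at random among those whose value
    for that attribute is still hidden. X is the complete data matrix,
    but the learner only sees entries it has paid for.
    """
    rng = rng or random.Random(0)
    n, d = len(X), len(X[0])
    revealed = [[None] * d for _ in range(n)]  # entries the learner has bought
    pulls = [0] * d                            # bandit pull counts per attribute
    reward = [0.0] * d                         # cumulative reward per attribute

    for _ in range(budget):
        # attributes (arms) that still have unpurchased entries
        open_attrs = [j for j in range(d)
                      if any(revealed[i][j] is None for i in range(n))]
        if not open_attrs:
            break  # every entry has been purchased
        # epsilon-greedy arm choice: explore, or exploit the best mean reward
        if rng.random() < epsilon or all(pulls[j] == 0 for j in open_attrs):
            a = rng.choice(open_attrs)
        else:
            a = max(open_attrs,
                    key=lambda j: reward[j] / pulls[j] if pulls[j] else 0.0)
        # instance chosen uniformly at random among those missing attribute a
        candidates = [i for i in range(n) if revealed[i][a] is None]
        i = rng.choice(candidates)
        revealed[i][a] = X[i][a]  # pay one budget unit, reveal one entry
        pulls[a] += 1
        # stand-in reward: did the purchased value agree with the label?
        reward[a] += 1.0 if X[i][a] == y[i] else 0.0
    return revealed, pulls

# usage on a tiny synthetic binary dataset
demo = random.Random(1)
X = [[demo.randint(0, 1) for _ in range(3)] for _ in range(8)]
y = [demo.randint(0, 1) for _ in range(8)]
revealed, pulls = budgeted_purchase(X, y, budget=10)
```

The new instance selectors mentioned in the abstract would replace the uniform `rng.choice(candidates)` step with a more informed heuristic.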
Cite
Text
Deng et al. "New Algorithms for Budgeted Learning." Machine Learning, 2013. doi:10.1007/s10994-012-5299-2

Markdown
[Deng et al. "New Algorithms for Budgeted Learning." Machine Learning, 2013.](https://mlanthology.org/mlj/2013/deng2013mlj-new/) doi:10.1007/s10994-012-5299-2

BibTeX
@article{deng2013mlj-new,
title = {{New Algorithms for Budgeted Learning}},
author = {Deng, Kun and Zheng, Yaling and Bourke, Chris and Scott, Stephen and Masciale, Julie},
journal = {Machine Learning},
year = {2013},
pages = {59-90},
doi = {10.1007/s10994-012-5299-2},
volume = {90},
url = {https://mlanthology.org/mlj/2013/deng2013mlj-new/}
}