Learning and Classifying Under Hard Budgets
Abstract
Since resources for data acquisition are seldom infinite, both learners and classifiers must act intelligently under hard budgets. In this paper, we consider problems in which feature values are unknown to both the learner and classifier, but can be acquired at a cost. Our goal is a learner that spends its fixed learning budget b_L acquiring training data, to produce the most accurate “active classifier” that spends at most b_C per instance. To produce this fixed-budget classifier, the fixed-budget learner must sequentially decide which feature values to collect to learn the relevant information about the distribution. We explore several approaches the learner can take, including the standard “round robin” policy (purchasing every feature of every instance until the b_L budget is exhausted). We demonstrate empirically that round robin is problematic (especially for small b_L), and provide alternate learning strategies that achieve superior performance on a variety of datasets.
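The "round robin" baseline from the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the abstract does not specify the purchase order, so cycling instance by instance (and the unit feature cost) are assumptions here.

```python
# Sketch of the "round robin" acquisition policy: purchase every feature
# of every instance until the learning budget b_L is exhausted.
# The instance-by-instance ordering and uniform cost are assumptions.

def round_robin_acquire(n_instances, n_features, cost, budget):
    """Return the list of (instance, feature) pairs purchased, and the spend."""
    purchased = []
    spent = 0.0
    for i in range(n_instances):          # cycle through training instances
        for j in range(n_features):       # buy each feature of instance i
            if spent + cost > budget:     # stop once the budget is exhausted
                return purchased, spent
            purchased.append((i, j))
            spent += cost
    return purchased, spent

pairs, spent = round_robin_acquire(n_instances=100, n_features=10,
                                   cost=1.0, budget=25.0)
```

With unit costs and b_L = 25, only the first two instances are fully purchased before the budget runs out, which illustrates why round robin can be wasteful when b_L is small: the budget is spread over complete instances rather than the most informative feature values.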
Cite
Text
Kapoor and Greiner. "Learning and Classifying Under Hard Budgets." European Conference on Machine Learning, 2005. doi:10.1007/11564096_20
Markdown
[Kapoor and Greiner. "Learning and Classifying Under Hard Budgets." European Conference on Machine Learning, 2005.](https://mlanthology.org/ecmlpkdd/2005/kapoor2005ecml-learning/) doi:10.1007/11564096_20
BibTeX
@inproceedings{kapoor2005ecml-learning,
title = {{Learning and Classifying Under Hard Budgets}},
author = {Kapoor, Aloak and Greiner, Russell},
booktitle = {European Conference on Machine Learning},
year = {2005},
pages = {170--181},
doi = {10.1007/11564096_20},
url = {https://mlanthology.org/ecmlpkdd/2005/kapoor2005ecml-learning/}
}