Active Learning with Irrelevant Examples

Abstract

Active learning algorithms attempt to accelerate the learning process by requesting labels for the most informative items first. In real-world problems, however, there may exist unlabeled items that are irrelevant to the user's classification goals. Queries about these points slow down learning because they provide no information about the problem of interest. We have observed that when irrelevant items are present, active learning can perform worse than random selection, requiring more time (queries) to achieve the same level of accuracy. Therefore, we propose a novel approach, Relevance Bias, in which the active learner combines its default selection heuristic with the output of a simultaneously trained relevance classifier to favor items that are likely to be both informative and relevant. In our experiments on a real-world problem and two benchmark datasets, the Relevance Bias approach significantly improves the learning rate of three different active learning approaches.
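The core idea described above can be sketched in a few lines. The following is a minimal illustration, not the paper's exact method: it assumes an active learner that scores pool items by informativeness (e.g. uncertainty) and a separately trained relevance classifier that outputs a probability of relevance, and combines the two by a simple product. The function name, toy scores, and the product combination are illustrative assumptions.

```python
import numpy as np

def relevance_bias_scores(informativeness, p_relevant):
    """Combine an active learner's informativeness scores with a
    relevance classifier's estimated probability of relevance, so that
    queries favor items likely to be both informative and relevant.
    (Illustrative combination: elementwise product; the paper's exact
    weighting may differ.)"""
    return np.asarray(informativeness, dtype=float) * np.asarray(p_relevant, dtype=float)

# Toy pool of 4 unlabeled items. Item 1 is the most informative but is
# probably irrelevant; item 2 is fairly informative and likely relevant.
info = [0.2, 0.9, 0.6, 0.4]    # e.g. uncertainty of the main classifier
p_rel = [0.9, 0.1, 0.8, 0.7]   # output of the relevance classifier

scores = relevance_bias_scores(info, p_rel)
query_idx = int(np.argmax(scores))  # item 2: informative AND relevant
```

Without the relevance term, plain uncertainty sampling would query item 1 (informativeness 0.9), likely wasting a label on an irrelevant point; with the bias, item 2 is selected instead.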

Cite

Text

Mazzoni et al. "Active Learning with Irrelevant Examples." European Conference on Machine Learning, 2006. doi:10.1007/11871842_69

Markdown

[Mazzoni et al. "Active Learning with Irrelevant Examples." European Conference on Machine Learning, 2006.](https://mlanthology.org/ecmlpkdd/2006/mazzoni2006ecml-active/) doi:10.1007/11871842_69

BibTeX

@inproceedings{mazzoni2006ecml-active,
  title     = {{Active Learning with Irrelevant Examples}},
  author    = {Mazzoni, Dominic and Wagstaff, Kiri and Burl, Michael C.},
  booktitle = {European Conference on Machine Learning},
  year      = {2006},
  pages     = {695--702},
  doi       = {10.1007/11871842_69},
  url       = {https://mlanthology.org/ecmlpkdd/2006/mazzoni2006ecml-active/}
}