Selective Sampling with Redundant Views
Abstract
Selective sampling, a form of active learning, reduces the cost of labeling training data by asking only for the labels of the most informative unlabeled examples. We introduce a novel approach to selective sampling which we call co-testing. Co-testing can be applied to problems with redundant views (i.e., problems with multiple disjoint sets of attributes that can be used for learning). We analyze the most general algorithm in the co-testing family, naive co-testing, which can be used with virtually any type of learner. Naive co-testing simply selects at random an example on which the existing views disagree. We applied our algorithm to a variety of domains, including three real-world problems: wrapper induction, Web page classification, and discourse tree parsing. The empirical results show that besides reducing the number of labeled examples, naive co-testing may also boost the classification accuracy.
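The query strategy described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes two views given as disjoint feature indices, a trivial per-view threshold learner standing in for "virtually any type of learner", and a hypothetical `oracle` callable that plays the role of the human labeler. At each round it retrains both views, collects the contention points (unlabeled examples on which the views disagree), and queries one of them at random.

```python
import random

def train_view(labeled, view):
    # Trivial stand-in learner for one view: threshold a single feature
    # at the midpoint between the two class means.
    xs0 = [x[view] for x, y in labeled if y == 0]
    xs1 = [x[view] for x, y in labeled if y == 1]
    thr = (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2
    return lambda x: int(x[view] > thr)

def naive_cotesting(labeled, unlabeled, oracle, n_queries, views=(0, 1)):
    """Naive co-testing sketch: repeatedly query a random contention point."""
    labeled = list(labeled)
    pool = list(unlabeled)
    for _ in range(n_queries):
        h = [train_view(labeled, v) for v in views]
        # Contention points: unlabeled examples on which the views disagree.
        contention = [x for x in pool if h[0](x) != h[1](x)]
        if not contention:
            break  # the views agree everywhere; nothing informative to ask
        x = random.choice(contention)   # "naive": pick one at random
        pool.remove(x)
        labeled.append((x, oracle(x)))  # ask the labeler for its true label
    return labeled
```

The intuition is that a contention point is guaranteed to be misclassified by at least one view, so labeling it corrects at least one of the hypotheses; any classifier that exposes a predict function can replace the toy `train_view` learner.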
Muslea et al. "Selective Sampling with Redundant Views." AAAI Conference on Artificial Intelligence, 2000.
@inproceedings{muslea2000aaai-selective,
title = {{Selective Sampling with Redundant Views}},
author = {Muslea, Ion and Minton, Steven and Knoblock, Craig A.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2000},
pages = {621-626},
url = {https://mlanthology.org/aaai/2000/muslea2000aaai-selective/}
}