On Discarding, Caching, and Recalling Samples in Active Learning
Abstract
We address challenges of active learning under scarce informational resources in non-stationary environments. In real-world settings, data labeled and integrated into a predictive model may become invalid over time. However, the data can become informative again with switches in context, and such changes may indicate unmodeled cyclic or other temporal dynamics. We explore principles for discarding, caching, and recalling labeled data points in active learning based on computations of value of information. We review key concepts and study the value of the methods via investigations of predictive performance and the costs of acquiring data for simulated and real-world data sets.
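The sketch below is a minimal illustration, not the authors' algorithm, of how value-of-information-style reasoning might triage labeled samples into kept, cached, and discarded sets. The nearest-centroid model, the leave-one-out scoring of each sample, and the two thresholds are assumptions made purely for illustration.

import numpy as np

def centroid_predict(X_train, y_train, X_eval):
    # Nearest-class-centroid rule; a stand-in for any predictive model.
    classes = np.unique(y_train)
    centroids = {c: X_train[y_train == c].mean(axis=0) for c in classes}
    preds = []
    for x in X_eval:
        dists = {c: np.linalg.norm(x - mu) for c, mu in centroids.items()}
        preds.append(min(dists, key=dists.get))
    return np.array(preds)

def marginal_value(i, X, y, X_val, y_val):
    # Estimated value of sample i: validation-accuracy gain from including it.
    mask = np.arange(len(y)) != i
    acc_with = (centroid_predict(X, y, X_val) == y_val).mean()
    acc_without = (centroid_predict(X[mask], y[mask], X_val) == y_val).mean()
    return acc_with - acc_without

def triage_samples(X, y, X_val, y_val, keep_thresh=0.0, discard_thresh=-0.05):
    # Partition labeled samples by marginal value (thresholds are illustrative).
    kept, cached, discarded = [], [], []
    for i in range(len(y)):
        v = marginal_value(i, X, y, X_val, y_val)
        if v >= keep_thresh:
            kept.append(i)        # currently informative: keep in the working set
        elif v >= discard_thresh:
            cached.append(i)      # uninformative now, may be recalled after a context switch
        else:
            discarded.append(i)   # clearly unhelpful under any recent context: drop
    return kept, cached, discarded

In this sketch, a detected context switch would simply trigger re-scoring of the cached indices against fresh validation data, recalling those whose marginal value has turned positive; the caching threshold trades storage cost against the chance of later recall.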
Cite
Text
Kapoor and Horvitz. "On Discarding, Caching, and Recalling Samples in Active Learning." Conference on Uncertainty in Artificial Intelligence, 2007. doi:10.5555/3020488.3020514
Markdown
[Kapoor and Horvitz. "On Discarding, Caching, and Recalling Samples in Active Learning." Conference on Uncertainty in Artificial Intelligence, 2007.](https://mlanthology.org/uai/2007/kapoor2007uai-discarding/) doi:10.5555/3020488.3020514
BibTeX
@inproceedings{kapoor2007uai-discarding,
title = {{On Discarding, Caching, and Recalling Samples in Active Learning}},
author = {Kapoor, Ashish and Horvitz, Eric},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {2007},
pages = {209-216},
doi = {10.5555/3020488.3020514},
url = {https://mlanthology.org/uai/2007/kapoor2007uai-discarding/}
}