The Sample Complexity of Self-Verifying Bayesian Active Learning
Abstract
We prove that access to a prior distribution over target functions can dramatically improve the sample complexity of self-terminating active learning algorithms, making it always strictly better than the known results for prior-dependent passive learning. This stands in stark contrast to the analysis of prior-independent algorithms, where there are simple known learning problems for which no self-terminating algorithm can provide this guarantee for all priors.
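A rough formalization of this contrast is sketched below in LaTeX. The notation LC(ε, h*) for the label complexity at target h*, and the specific rates displayed, are illustrative assumptions about the form of the result rather than text taken from the paper:

% Hedged sketch of the claimed contrast; LC and the displayed rates are
% assumed notation for illustration, not quoted from the paper.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Writing $\mathrm{LC}(\varepsilon, h^{*})$ for the number of label requests a
self-terminating learner makes before halting with an $\varepsilon$-good
classifier when the target is $h^{*}$, the prior-dependent guarantee has the form
\[
  \mathbb{E}_{h^{*}\sim\pi}\!\left[\mathrm{LC}(\varepsilon, h^{*})\right]
  \;=\; o\!\left(1/\varepsilon\right)
  \quad\text{for every prior } \pi,
\]
improving on the $\Theta(1/\varepsilon)$ expected sample complexity of
prior-dependent passive learning; by contrast, no prior-independent
self-terminating algorithm can achieve this for all priors on some simple classes.
\end{document}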
Cite
Text
Yang et al. "The Sample Complexity of Self-Verifying Bayesian Active Learning." Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011.Markdown
[Yang et al. "The Sample Complexity of Self-Verifying Bayesian Active Learning." Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011.](https://mlanthology.org/aistats/2011/yang2011aistats-sample/)
BibTeX
@inproceedings{yang2011aistats-sample,
title = {{The Sample Complexity of Self-Verifying Bayesian Active Learning}},
author = {Yang, Liu and Hanneke, Steve and Carbonell, Jaime},
booktitle = {Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics},
year = {2011},
pages = {816--822},
volume = {15},
url = {https://mlanthology.org/aistats/2011/yang2011aistats-sample/}
}