Minimax Bounds for Active Learning
Abstract
This paper aims to shed light on achievable limits in active learning. Using minimax analysis techniques, we study the achievable rates of classification error convergence for broad classes of distributions characterized by decision boundary regularity and noise conditions. The results clearly indicate the conditions under which one can expect significant gains through active learning. Furthermore, we show that the learning rates derived are tight for “boundary fragment” classes in d-dimensional feature spaces when the feature marginal density is bounded from above and below.
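For reference, the quantity targeted by a minimax analysis of this kind is the worst-case expected excess classification risk over the distribution class; the display below is an illustrative sketch in generic notation, not quoted from the paper:

\[
\inf_{\hat{f}_n} \; \sup_{P \in \mathcal{P}} \; \mathbb{E}_P\!\left[ R(\hat{f}_n) - R(f^{*}_{P}) \right],
\qquad R(f) = \Pr_P\bigl(f(X) \neq Y\bigr),
\]

where \(\mathcal{P}\) denotes the class of joint distributions of \((X, Y)\) satisfying the assumed boundary regularity and noise conditions, \(f^{*}_{P}\) is the Bayes classifier under \(P\), and the infimum ranges over learning strategies that obtain \(n\) labeled examples, either passively or through active queries.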
Cite

Text
Castro and Nowak. "Minimax Bounds for Active Learning." Annual Conference on Computational Learning Theory, 2007. doi:10.1007/978-3-540-72927-3_3

Markdown
[Castro and Nowak. "Minimax Bounds for Active Learning." Annual Conference on Computational Learning Theory, 2007.](https://mlanthology.org/colt/2007/castro2007colt-minimax/) doi:10.1007/978-3-540-72927-3_3

BibTeX
@inproceedings{castro2007colt-minimax,
title = {{Minimax Bounds for Active Learning}},
author = {Castro, Rui M. and Nowak, Robert D.},
booktitle = {Annual Conference on Computational Learning Theory},
year = {2007},
pages = {5-19},
doi = {10.1007/978-3-540-72927-3_3},
url = {https://mlanthology.org/colt/2007/castro2007colt-minimax/}
}