Agnostic Active Learning

Abstract

We state and analyze the first active learning algorithm which works in the presence of arbitrary forms of noise. The algorithm, A2 (for Agnostic Active), relies only upon the assumption that the samples are drawn i.i.d. from a fixed distribution. We show that A2 achieves an exponential improvement (i.e., requires only O(ln 1/ε) samples to find an ε-optimal classifier) over the usual sample complexity of supervised learning, for several settings considered before in the realizable case. These include learning threshold classifiers and learning homogeneous linear separators with respect to an input distribution which is uniform over the unit sphere.
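To give intuition for the O(ln 1/ε) label complexity mentioned above, here is a minimal sketch (not the A2 algorithm, and assuming the noiseless, realizable case) of why actively learning a one-dimensional threshold classifier needs only logarithmically many label queries: the boundary can be found by binary search over a sorted unlabeled pool. A2's contribution is extending this kind of saving to arbitrary noise; the helper names below are illustrative only.

```python
def active_learn_threshold(points, query_label):
    """Binary-search for the 0/1 boundary in a sorted pool `points`.

    Illustrative sketch, not the A2 algorithm: assumes labels are
    noiseless and consistent with some threshold t, i.e.
    query_label(x) == 1 iff x >= t. Returns (threshold, num_queries);
    only O(log n) labels are requested instead of all n.
    """
    lo, hi = 0, len(points)  # invariant: boundary index lies in [lo, hi]
    queries = 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if query_label(points[mid]) == 1:
            hi = mid          # boundary is at or before mid
        else:
            lo = mid + 1      # boundary is strictly after mid
    # All points labeled 0 -> boundary beyond the pool
    threshold = points[lo] if lo < len(points) else float("inf")
    return threshold, queries
```

On a pool of 1000 points this issues at most 10 label queries (2^10 = 1024), versus 1000 labels for passive supervised learning of the same threshold.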

Cite

Text

Balcan et al. "Agnostic Active Learning." International Conference on Machine Learning, 2006. doi:10.1145/1143844.1143853

Markdown

[Balcan et al. "Agnostic Active Learning." International Conference on Machine Learning, 2006.](https://mlanthology.org/icml/2006/balcan2006icml-agnostic/) doi:10.1145/1143844.1143853

BibTeX

@inproceedings{balcan2006icml-agnostic,
  title     = {{Agnostic Active Learning}},
  author    = {Balcan, Maria-Florina and Beygelzimer, Alina and Langford, John},
  booktitle = {International Conference on Machine Learning},
  year      = {2006},
  pages     = {65--72},
  doi       = {10.1145/1143844.1143853},
  url       = {https://mlanthology.org/icml/2006/balcan2006icml-agnostic/}
}