Analysis of Perceptron-Based Active Learning

Abstract

We start by showing that in an active learning setting, the Perceptron algorithm needs Ω(1/ε²) labels to learn linear separators within generalization error ε. We then present a simple active learning algorithm for this problem, which combines a modification of the Perceptron update with an adaptive filtering rule for deciding which points to query. For data distributed uniformly over the unit sphere, we show that our algorithm reaches generalization error ε after asking for just Õ(d log 1/ε) labels. This exponential improvement over the usual sample complexity of supervised learning had previously been demonstrated only for the computationally more complex query-by-committee algorithm.
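
The abstract describes the method only at a high level. Purely as an illustration, the Python sketch below simulates one plausible instantiation of that recipe: a margin-based filter that queries labels only for points close to the current hyperplane, a reflection-style Perceptron update on mistakes, and a threshold that is halved after a run of consecutive correct predictions on queried points. The specific threshold schedule, constants, and error estimate are assumptions made for this sketch, not the exact algorithm analyzed in the paper.

    import numpy as np

    # Illustrative sketch only: a margin-based query filter combined with a
    # Perceptron-style update, in the spirit of the algorithm the abstract
    # describes. The threshold schedule and all constants below are assumptions.

    rng = np.random.default_rng(0)

    def unit(v):
        """Normalize a vector to the unit sphere."""
        return v / np.linalg.norm(v)

    d = 10                                   # dimension
    w_star = unit(rng.normal(size=d))        # hidden target separator (oracle, for simulation)
    w = unit(rng.normal(size=d))             # current hypothesis
    s = 1.0                                  # query threshold on |w . x| (assumed schedule)
    labels_used = 0
    correct_since_update = 0

    for t in range(20000):
        x = unit(rng.normal(size=d))         # unlabeled point, uniform on the sphere
        margin = float(w @ x)
        if abs(margin) > s:                  # filtering rule: skip confidently classified points
            continue
        y = np.sign(w_star @ x)              # query the label
        labels_used += 1
        if np.sign(margin) != y:
            # Reflection-style Perceptron update on a mistake:
            # w <- w - 2 (w . x) x keeps ||w|| = 1 when ||x|| = 1.
            w = w - 2.0 * margin * x
            correct_since_update = 0
        else:
            correct_since_update += 1
            if correct_since_update >= 3 * d:  # adaptive step: shrink the query threshold
                s /= 2.0
                correct_since_update = 0

    # For the uniform distribution on the sphere, the generalization error of w
    # equals the angle between w and w_star divided by pi.
    error = np.arccos(np.clip(w @ w_star, -1.0, 1.0)) / np.pi
    print(f"labels queried: {labels_used}, estimated error: {error:.4f}")

Running this sketch, the number of queried labels grows far more slowly than the number of unlabeled points seen, which is the qualitative behavior the abstract's Õ(d log 1/ε) label bound formalizes; the quantitative guarantees, of course, come from the paper's analysis, not from this toy simulation.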

Cite

Text

Dasgupta et al. "Analysis of Perceptron-Based Active Learning." Journal of Machine Learning Research, 2009.

Markdown

[Dasgupta et al. "Analysis of Perceptron-Based Active Learning." Journal of Machine Learning Research, 2009.](https://mlanthology.org/jmlr/2009/dasgupta2009jmlr-analysis/)

BibTeX

@article{dasgupta2009jmlr-analysis,
  title     = {{Analysis of Perceptron-Based Active Learning}},
  author    = {Dasgupta, Sanjoy and Kalai, Adam Tauman and Monteleoni, Claire},
  journal   = {Journal of Machine Learning Research},
  year      = {2009},
  pages     = {281--299},
  volume    = {10},
  url       = {https://mlanthology.org/jmlr/2009/dasgupta2009jmlr-analysis/}
}