Incremental Active Learning for Optimal Generalization

Abstract

The problem of designing input signals for optimal generalization is called active learning. In this article, we give a two-stage sampling scheme for reducing both the bias and the variance, and based on this scheme, we propose two active learning methods. One is the multipoint search method, which is applicable to arbitrary models; its effectiveness is shown through computer simulations. The other is the optimal sampling method for trigonometric polynomial models, which precisely specifies the optimal sampling locations.
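A key property behind optimal sampling for trigonometric polynomial models is that equidistant sampling points make the basis functions orthogonal over the sample, so least-squares estimation attains minimal coefficient variance for a fixed noise level. Below is a minimal NumPy sketch of this fact (the helper name, model order, and variable names are illustrative, not from the paper):

```python
import numpy as np

def trig_design(x, order):
    # Design matrix for a trigonometric polynomial of the given order:
    # columns are [1, cos x, sin x, cos 2x, sin 2x, ..., cos(order*x), sin(order*x)]
    cols = [np.ones_like(x)]
    for k in range(1, order + 1):
        cols.append(np.cos(k * x))
        cols.append(np.sin(k * x))
    return np.column_stack(cols)

order = 3
M = 2 * order + 1  # number of model parameters

# Equidistant sampling locations on [0, 2*pi): x_m = 2*pi*m / M
x_eq = 2 * np.pi * np.arange(M) / M
A = trig_design(x_eq, order)

# At equidistant points the columns of A are mutually orthogonal
# (discrete orthogonality of sines and cosines), so A^T A is diagonal.
gram = A.T @ A
print(np.allclose(gram, np.diag(np.diag(gram))))  # True
```

With a non-equidistant design the Gram matrix is generally non-diagonal and worse conditioned, which inflates the variance of the least-squares estimate; this is the intuition the optimal sampling method makes precise.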

Cite

Text

Sugiyama and Ogawa. "Incremental Active Learning for Optimal Generalization." Neural Computation, 12:2909-2940, 2001. doi:10.1162/089976600300014773

Markdown

[Sugiyama and Ogawa. "Incremental Active Learning for Optimal Generalization." Neural Computation, 12:2909-2940, 2001.](https://mlanthology.org/neco/2001/sugiyama2001neco-incremental/) doi:10.1162/089976600300014773

BibTeX

@article{sugiyama2001neco-incremental,
  title     = {{Incremental Active Learning for Optimal Generalization}},
  author    = {Sugiyama, Masashi and Ogawa, Hidemitsu},
  journal   = {Neural Computation},
  year      = {2001},
  pages     = {2909--2940},
  doi       = {10.1162/089976600300014773},
  volume    = {12},
  url       = {https://mlanthology.org/neco/2001/sugiyama2001neco-incremental/}
}