Four Types of Learning Curves
Abstract
If machines are learning to make decisions given a number of examples, the generalization error ε(t) is defined as the average probability that a machine trained with t examples makes an incorrect decision on a new example. The generalization error decreases as t increases, and the curve ε(t) is called a learning curve. The present paper uses the Bayesian approach to show that, under the annealed approximation, learning curves can be classified into four asymptotic types. If the machine is deterministic with noiseless teacher signals, then (1) ε ∼ at^(-1) when the correct machine parameter is unique, and (2) ε ∼ at^(-2) when the set of correct parameters has a finite measure. If the teacher signals are noisy, then (3) ε ∼ at^(-1/2) for a deterministic machine, and (4) ε ∼ c + at^(-1) for a stochastic machine.
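The four asymptotic forms above can be sketched numerically. The following is a minimal illustration only: the constants a and c are arbitrary placeholder values, not values derived in the paper.

```python
# Sketch of the four asymptotic learning-curve types from the abstract.
# a and c are illustrative constants (assumptions, not from the paper).

def learning_curves(t, a=1.0, c=0.1):
    """Return the four asymptotic generalization errors after t examples."""
    return {
        # deterministic machine, noiseless teacher, unique correct parameter
        "unique_parameter": a * t ** -1,
        # deterministic machine, noiseless teacher, correct set of finite measure
        "finite_measure": a * t ** -2,
        # deterministic machine, noisy teacher
        "noisy_deterministic": a * t ** -0.5,
        # stochastic machine, noisy teacher: error saturates at residual level c
        "noisy_stochastic": c + a * t ** -1,
    }

if __name__ == "__main__":
    for t in (10, 100, 1000):
        print(t, learning_curves(t))
```

Note that only type (4) fails to converge to zero: as t grows, the stochastic machine's error approaches the residual constant c rather than vanishing.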
Amari et al. "Four Types of Learning Curves." Neural Computation, 1992. doi:10.1162/NECO.1992.4.4.605

BibTeX:
@article{amari1992neco-four,
title = {{Four Types of Learning Curves}},
author = {Amari, Shun-ichi and Fujita, Naotake and Shinomoto, Shigeru},
journal = {Neural Computation},
year = {1992},
pages = {605--618},
doi = {10.1162/NECO.1992.4.4.605},
volume = {4},
number = {4},
url = {https://mlanthology.org/neco/1992/amari1992neco-four/}
}