Dynamics of Generalization in Linear Perceptrons
Abstract
We study the evolution of the generalization ability of a simple linear perceptron with N inputs which learns to imitate a "teacher perceptron". The system is trained on p = αN binary example inputs, and the generalization ability is measured by testing for agreement with the teacher on all 2^N possible binary input patterns. The dynamics may be solved analytically and exhibits a phase transition from imperfect to perfect generalization at α = 1. Except at this point, the generalization ability approaches its asymptotic value exponentially, with critical slowing down near the transition; the relaxation time is proportional to (1 - √α)^(-2). Right at the critical point, the approach to perfect generalization follows a power law, proportional to t^(-1/2). In the presence of noise, the generalization ability is degraded by an amount proportional to (√α - 1)^(-1) just above α = 1.
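The dynamics summarized above can be reproduced qualitatively with a short numerical sketch (not part of the paper): a linear student is trained by plain gradient descent on the squared error over p = αN random ±1 examples generated by a random teacher. For uniform ±1 inputs the expected squared disagreement with the teacher reduces to ||w - w*||², so that quantity serves as the generalization error. The function name, learning rate, and step counts below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def simulate(N=200, alpha=0.5, eta=0.01, steps=2000, noise=0.0, seed=0):
    """Gradient-descent learning of a linear student from a linear teacher."""
    rng = np.random.default_rng(seed)
    p = int(alpha * N)
    w_teacher = rng.standard_normal(N) / np.sqrt(N)      # random teacher weights
    X = rng.choice([-1.0, 1.0], size=(p, N))             # p = alpha*N binary example inputs
    y = X @ w_teacher + noise * rng.standard_normal(p)   # teacher outputs, optionally noisy

    w = np.zeros(N)                                      # student starts from zero weights
    eps_g = np.empty(steps)
    for t in range(steps):
        grad = X.T @ (X @ w - y) / N                     # gradient of the quadratic training error
        w -= eta * grad
        # For uniform +-1 inputs, E[(w.x - w*.x)^2] = ||w - w*||^2,
        # so this equals the generalization error over all 2^N binary patterns.
        eps_g[t] = np.sum((w - w_teacher) ** 2)
    return eps_g

# Relaxation below, at, and above the transition alpha = 1
for a in (0.5, 1.0, 2.0):
    curve = simulate(alpha=a)
    print(f"alpha = {a}: generalization error after training = {curve[-1]:.4f}")
```

With these settings the error for α = 0.5 plateaus at a nonzero value (imperfect generalization, since p < N leaves an unlearned null space), for α = 2 it decays roughly exponentially, and near α = 1 the decay is visibly slower, consistent with the (1 - √α)^(-2) relaxation time quoted in the abstract.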
Cite
Krogh, Anders and Hertz, John A. "Dynamics of Generalization in Linear Perceptrons." Neural Information Processing Systems, 1990, pp. 897-903. https://mlanthology.org/neurips/1990/krogh1990neurips-dynamics/

BibTeX
@inproceedings{krogh1990neurips-dynamics,
title = {{Dynamics of Generalization in Linear Perceptrons}},
author = {Krogh, Anders and Hertz, John A.},
booktitle = {Neural Information Processing Systems},
year = {1990},
pages = {897-903},
url = {https://mlanthology.org/neurips/1990/krogh1990neurips-dynamics/}
}