Learning in Large Linear Perceptrons and Why the Thermodynamic Limit Is Relevant to the Real World
Abstract
We present a new method for obtaining the response function 𝒢 and its average G, from which most of the properties of learning and generalization in linear perceptrons can be derived. We first rederive the known results for the 'thermodynamic limit' of infinite perceptron size N and show explicitly that 𝒢 is self-averaging in this limit. We then discuss extensions of our method to more general learning scenarios with anisotropic teacher space priors, input distributions, and weight decay terms. Finally, we use our method to calculate the finite N corrections of order 1/N to G and discuss the corresponding finite size effects on generalization and learning dynamics. An important spin-off is the observation that results obtained in the thermodynamic limit are often directly relevant to systems of fairly modest, 'real-world' sizes.
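To give the flavor of the quantities involved: in the simplest isotropic setting, the thermodynamic-limit average response function coincides with the Stieltjes transform of the Marchenko–Pastur law. The sketch below is a reconstruction under assumed conventions (unit-variance isotropic inputs, weight decay λ, load α = p/N, and input covariance M = N⁻¹ Σ_μ x^μ (x^μ)ᵀ); the paper's own normalization may differ.

```latex
% Minimal sketch, assuming unit-variance isotropic inputs and weight decay
% lambda > 0; these conventions are our assumption, not taken from the abstract.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
The response function and its average are
\begin{equation}
  \mathcal{G}(\lambda) = \frac{1}{N}\operatorname{tr}\,(\lambda I + M)^{-1},
  \qquad
  G(\lambda) = \langle \mathcal{G}(\lambda) \rangle,
  \qquad
  M = \frac{1}{N}\sum_{\mu=1}^{p} x^{\mu} (x^{\mu})^{\mathsf{T}} .
\end{equation}
For $N \to \infty$ at fixed load $\alpha = p/N$, $G$ obeys the self-consistent
(Marchenko--Pastur) equation
\begin{equation}
  G = \frac{1}{\lambda + \dfrac{\alpha}{1 + G}},
  \qquad\text{equivalently}\qquad
  \lambda G^{2} + (\lambda + \alpha - 1)\,G - 1 = 0,
\end{equation}
whose positive root is
\begin{equation}
  G = \frac{-(\lambda + \alpha - 1)
        + \sqrt{(\lambda + \alpha - 1)^{2} + 4\lambda}}{2\lambda} .
\end{equation}
\end{document}
```

As a quick consistency check, taking λ → 0⁺ with α > 1 gives G = 1/(α − 1), consistent with the well-known divergence of learning quantities as the number of examples p approaches the number of weights N.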
Cite
Text
Sollich. "Learning in Large Linear Perceptrons and Why the Thermodynamic Limit Is Relevant to the Real World." Neural Information Processing Systems, 1994.Markdown
[Sollich. "Learning in Large Linear Perceptrons and Why the Thermodynamic Limit Is Relevant to the Real World." Neural Information Processing Systems, 1994.](https://mlanthology.org/neurips/1994/sollich1994neurips-learning/)BibTeX
@inproceedings{sollich1994neurips-learning,
title = {{Learning in Large Linear Perceptrons and Why the Thermodynamic Limit Is Relevant to the Real World}},
author = {Sollich, Peter},
booktitle = {Neural Information Processing Systems},
year = {1994},
pages = {207--214},
url = {https://mlanthology.org/neurips/1994/sollich1994neurips-learning/}
}