Connectionist Implementation of a Theory of Generalization
Abstract
Empirically, generalization between a training and a test stimulus falls off in close approximation to an exponential decay function of the distance between the two stimuli in the "stimulus space" obtained by multidimensional scaling. Mathematically, this result is derivable from the assumption that an individual takes the training stimulus to belong to a "consequential" region that includes that stimulus but is otherwise of unknown location, size, and shape in the stimulus space (Shepard, 1987). As the individual gains additional information about the consequential region, by finding other stimuli to be consequential or not, the theory predicts that the shape of the generalization function will change toward the function relating the actual probability of the consequence to location in the stimulus space. This paper describes a natural connectionist implementation of the theory, and illustrates how implications of the theory for generalization, discrimination, and classification learning can be explored by connectionist simulation.
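The abstract's derivation can be sketched numerically. The following is a minimal one-dimensional Monte Carlo illustration, not the paper's connectionist implementation: the training stimulus sits at 0 and is assumed to belong to a consequential interval of unknown size and location; interval sizes are drawn from an exponential prior (an illustrative assumption here; Shepard, 1987, shows many reasonable priors yield a near-exponential result), and the interval's placement is uniform among placements containing the training stimulus. Averaging the probability that a test stimulus at distance d lies in the same interval produces a generalization function that decays with distance roughly exponentially.

```python
import random

def generalization(d, n_samples=100_000, mean_size=1.0, seed=0):
    """Monte Carlo estimate of a consequential-region generalization
    function in one dimension.

    For an interval of size s containing the training stimulus at 0,
    with location uniform over such placements, the probability that a
    test stimulus at distance d falls in the same interval is
    max(0, 1 - d/s).  Averaging over an exponential prior on s (an
    illustrative choice) gives the expected generalization.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # Region size ~ Exp(mean_size); clip away from zero for safety.
        s = max(rng.expovariate(1.0 / mean_size), 1e-12)
        total += max(0.0, 1.0 - d / s)  # P(test point in region | s)
    return total / n_samples

# Generalization is 1 at zero distance and falls off monotonically,
# in close approximation to an exponential decay.
for d in (0.0, 0.5, 1.0, 2.0):
    print(f"g({d}) = {generalization(d):.3f}")
```

Replacing the exponential prior with other priors on region size (e.g. Erlang) changes the curve only slightly, which is the robustness the theory relies on.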
Cite
Shepard, Roger N., and Sheila Kannappan. "Connectionist Implementation of a Theory of Generalization." Neural Information Processing Systems, 1990. https://mlanthology.org/neurips/1990/shepard1990neurips-connectionist/
@inproceedings{shepard1990neurips-connectionist,
title = {{Connectionist Implementation of a Theory of Generalization}},
author = {Shepard, Roger N. and Kannappan, Sheila},
booktitle = {Neural Information Processing Systems},
year = {1990},
pages = {665--671},
url = {https://mlanthology.org/neurips/1990/shepard1990neurips-connectionist/}
}