Dimensionality Reduction and Prior Knowledge in E-Set Recognition
Abstract
It is well known that when an automatic learning algorithm is applied to a fixed corpus of data, the size of the corpus places an upper bound on the number of degrees of freedom that the model can contain if it is to generalize well. Because the amount of hardware in a neural network typically increases with the dimensionality of its inputs, it can be challenging to build a high-performance network for classifying large input patterns. In this paper, several techniques for addressing this problem are discussed in the context of an isolated word recognition task.
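To make the scaling argument in the abstract concrete, here is a minimal sketch (not from the paper) of how the free-parameter count of a one-hidden-layer network grows with input dimensionality, and how reducing the input dimension brings the parameter count under a corpus-sized budget. The layer sizes, corpus size, and input dimensions below are illustrative assumptions, not figures reported by Lang and Hinton.

```python
def n_parameters(n_inputs: int, n_hidden: int, n_outputs: int) -> int:
    """Weights plus biases of a fully connected net with one hidden layer."""
    return (n_inputs * n_hidden + n_hidden) + (n_hidden * n_outputs + n_outputs)

# Hypothetical sizes: a small hidden layer, four confusable output classes,
# and a training corpus of 800 examples (all assumed for illustration).
n_hidden, n_outputs = 8, 4
corpus_size = 800

# Compare a large raw input pattern against progressively reduced inputs.
for n_inputs in (1920, 240, 48):
    p = n_parameters(n_inputs, n_hidden, n_outputs)
    verdict = "within" if p <= corpus_size else "exceeds"
    print(f"{n_inputs:5d} inputs -> {p:6d} free parameters ({verdict} the {corpus_size}-example budget)")
```

Only the smallest input size keeps the parameter count below the example budget, which is the basic pressure toward dimensionality reduction (or weight sharing informed by prior knowledge) that the paper addresses.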
Cite
Text
Lang and Hinton. "Dimensionality Reduction and Prior Knowledge in E-Set Recognition." Neural Information Processing Systems, 1989.
Markdown
[Lang and Hinton. "Dimensionality Reduction and Prior Knowledge in E-Set Recognition." Neural Information Processing Systems, 1989.](https://mlanthology.org/neurips/1989/lang1989neurips-dimensionality/)
BibTeX
@inproceedings{lang1989neurips-dimensionality,
title = {{Dimensionality Reduction and Prior Knowledge in E-Set Recognition}},
author = {Lang, Kevin J. and Hinton, Geoffrey E.},
booktitle = {Neural Information Processing Systems},
year = {1989},
pages = {178-185},
url = {https://mlanthology.org/neurips/1989/lang1989neurips-dimensionality/}
}