Learning by Choice of Internal Representations
Abstract
We introduce a learning algorithm for multilayer neural networks composed of binary linear threshold elements. Whereas existing algorithms reduce the learning process to minimizing a cost function over the weights, our method treats the internal representations as the fundamental entities to be determined. Once a correct set of internal representations is arrived at, the weights are found by the local and biologically plausible Perceptron Learning Rule (PLR). We tested our learning algorithm on four problems: adjacency, symmetry, parity and combined symmetry-parity.
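The abstract refers to the Perceptron Learning Rule (PLR) as the local rule used to fit weights once target representations are fixed. As a minimal sketch of the PLR for a single binary (±1) linear threshold unit — the function name, parameters, and ±1 encoding here are illustrative choices, not taken from the paper:

```python
import numpy as np

def perceptron_learning_rule(X, targets, eta=0.1, max_epochs=100):
    """Train one binary (+/-1) linear threshold unit with the PLR.

    X: (n_samples, n_features) array of +/-1 inputs.
    targets: (n_samples,) array of +/-1 desired outputs.
    Returns learned weights and bias.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for x, t in zip(X, targets):
            y = 1.0 if np.dot(w, x) + b > 0 else -1.0
            if y != t:
                # Local update: move the weight vector toward (target * input).
                w += eta * t * x
                b += eta * t
                errors += 1
        if errors == 0:  # converged: all patterns classified correctly
            break
    return w, b
```

For linearly separable data (e.g. AND in the ±1 encoding) the perceptron convergence theorem guarantees this loop terminates with zero errors; the paper's contribution is choosing hidden-layer representations so that each unit's subproblem becomes separable and the PLR applies.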
Cite
Grossman et al. "Learning by Choice of Internal Representations." Neural Information Processing Systems, 1988.
@inproceedings{grossman1988neurips-learning,
title = {{Learning by Choice of Internal Representations}},
author = {Grossman, Tal and Meir, Ronny and Domany, Eytan},
booktitle = {Neural Information Processing Systems},
year = {1988},
pages = {73--80},
url = {https://mlanthology.org/neurips/1988/grossman1988neurips-learning/}
}