Learning in the Recurrent Random Neural Network
Abstract
The capacity to learn from examples is one of the most desirable features of neural network models. We present a learning algorithm for the recurrent random neural network model (Gelenbe 1989, 1990) based on gradient descent of a quadratic error function. The analytical properties of the model lead to a "backpropagation"-type algorithm that requires the solution of a system of n linear and n nonlinear equations each time the n-neuron network "learns" a new input-output pair.
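The n nonlinear equations mentioned in the abstract are the steady-state equations of the random neural network. As a rough illustration only, here is a minimal sketch that solves them by fixed-point iteration; the matrix/vector names, the illustrative rates, and the iteration scheme are assumptions based on the standard random neural network formulation, not the paper's exact algorithm:

```python
import numpy as np

def steady_state(W_plus, W_minus, Lam, lam, r, iters=200):
    """Solve the n nonlinear steady-state equations of the random
    neural network by fixed-point iteration (illustrative sketch).

    q_i = lambda^+(i) / (r_i + lambda^-(i)), where
      lambda^+(i) = Lam_i + sum_j q_j * W_plus[j, i]   (excitatory arrivals)
      lambda^-(i) = lam_i + sum_j q_j * W_minus[j, i]  (inhibitory arrivals)
    """
    q = np.zeros(len(r))
    for _ in range(iters):
        lam_plus = Lam + q @ W_plus    # excitatory arrival rate at each neuron
        lam_minus = lam + q @ W_minus  # inhibitory arrival rate at each neuron
        q = np.minimum(lam_plus / (r + lam_minus), 1.0)
    return q

# Hypothetical 2-neuron example: neuron 0 receives external excitation
# and excites neuron 1; all rate values below are illustrative.
W_plus = np.array([[0.0, 1.0], [0.0, 0.0]])   # excitatory weights w+_{ij}
W_minus = np.zeros((2, 2))                    # inhibitory weights w-_{ij}
Lam = np.array([1.0, 0.0])                    # external excitatory arrival rates
lam = np.array([0.0, 0.0])                    # external inhibitory arrival rates
r = np.array([2.0, 2.0])                      # neuron firing rates
q = steady_state(W_plus, W_minus, Lam, lam, r)  # -> approx [0.5, 0.25]
```

The learning step proper additionally requires solving the n linear equations for the derivatives of the q_i with respect to the weights during gradient descent on the quadratic error; that derivation is given in the paper and is not reproduced here.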
Cite
Text
Gelenbe. "Learning in the Recurrent Random Neural Network." Neural Computation, 1993. doi:10.1162/NECO.1993.5.1.154
Markdown
[Gelenbe. "Learning in the Recurrent Random Neural Network." Neural Computation, 1993.](https://mlanthology.org/neco/1993/gelenbe1993neco-learning/) doi:10.1162/NECO.1993.5.1.154
BibTeX
@article{gelenbe1993neco-learning,
title = {{Learning in the Recurrent Random Neural Network}},
author = {Gelenbe, Erol},
journal = {Neural Computation},
year = {1993},
pages = {154--164},
doi = {10.1162/NECO.1993.5.1.154},
volume = {5},
number = {1},
url = {https://mlanthology.org/neco/1993/gelenbe1993neco-learning/}
}