Recurrence Methods in the Analysis of Learning Processes
Abstract
The goal of most learning processes is to bring a machine into a set of “correct” states. In practice, however, it may be difficult to show that the process enters this target set. We present a condition that ensures that the process visits the target set infinitely often almost surely. This condition is easy to verify and holds for many well-known learning rules. To demonstrate the utility of this method, we apply it to four types of learning processes: the perceptron, learning rules governed by continuous energy functions, the Kohonen rule, and the committee machine.
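As a concrete illustration of the first of these learning processes, here is a minimal sketch of the classic perceptron update rule on a toy linearly separable dataset. The data and function names are illustrative assumptions, not taken from the paper; the recurrence analysis itself is not reproduced here.

```python
import numpy as np

def perceptron_step(w, x, y):
    """One perceptron update: if the example (x, y) with label y in {-1, +1}
    is misclassified (or on the boundary), move the weight vector toward
    the correct side of the separating hyperplane."""
    if y * np.dot(w, x) <= 0:    # misclassified or on the boundary
        w = w + y * x            # classic perceptron correction
    return w

# Toy linearly separable data (illustrative, not from the paper):
# points are labeled by the sign of a clear linear rule.
X = np.array([[2.0, 1.0], [1.0, -1.0], [-2.0, 1.0], [-1.0, -1.0]])
Y = np.array([1.0, 1.0, -1.0, -1.0])

w = np.zeros(2)
for _ in range(5):               # a few passes suffice on separable data
    for x, y in zip(X, Y):
        w = perceptron_step(w, x, y)

errors = sum(y * np.dot(w, x) <= 0 for x, y in zip(X, Y))
```

On separable data like this, the perceptron converges after finitely many corrections, so `errors` reaches zero; the paper's recurrence condition addresses the subtler question of when such a process revisits its target set infinitely often almost surely.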
Cite
Text
Mendelson and Nelken. "Recurrence Methods in the Analysis of Learning Processes." Neural Computation, 2001. doi:10.1162/08997660152469378
Markdown
[Mendelson and Nelken. "Recurrence Methods in the Analysis of Learning Processes." Neural Computation, 2001.](https://mlanthology.org/neco/2001/mendelson2001neco-recurrence/) doi:10.1162/08997660152469378
BibTeX
@article{mendelson2001neco-recurrence,
title = {{Recurrence Methods in the Analysis of Learning Processes}},
author = {Mendelson, Shahar and Nelken, Israel},
journal = {Neural Computation},
year = {2001},
pages = {1839-1861},
doi = {10.1162/08997660152469378},
volume = {13},
url = {https://mlanthology.org/neco/2001/mendelson2001neco-recurrence/}
}