A "Thermal" Perceptron Learning Rule

Abstract

The thermal perceptron is a simple extension to Rosenblatt's perceptron learning rule for training individual linear threshold units. It finds stable weights for nonseparable problems as well as separable ones. Experiments indicate that if a good initial setting for a temperature parameter, T0, has been found, then the thermal perceptron outperforms the Pocket algorithm and methods based on gradient descent. The learning rule stabilizes the weights (learns) over a fixed training period. For separable problems it finds separating weights much more quickly than the usual rules.
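The rule described in the abstract can be sketched in a few lines. This is a hedged reconstruction, not the paper's exact pseudocode: it assumes the standard form of the thermal perceptron, in which the usual perceptron update on a misclassified example is damped by a factor exp(-|phi|/T), where phi is the unit's net input, and the temperature T is annealed linearly from T0 to 0 over the fixed training period. The function name, the linear annealing schedule, and the per-example shuffling are illustrative choices.

```python
import numpy as np

def train_thermal_perceptron(X, y, T0=1.0, epochs=100, rng=None):
    """Sketch of a thermal perceptron for targets y in {-1, +1}.

    Assumed update: on a misclassified example, add
    y * x * exp(-|phi| / T), with T annealed linearly to 0.
    """
    rng = np.random.default_rng(rng)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias input
    w = np.zeros(Xb.shape[1])
    n_steps = epochs * len(Xb)                     # fixed training period
    step = 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            T = T0 * (1.0 - step / n_steps)        # assumed linear schedule
            step += 1
            phi = w @ Xb[i]
            if y[i] * phi <= 0:                    # misclassified
                # Damped perceptron update; guard against T == 0 at the end.
                w += y[i] * Xb[i] * np.exp(-abs(phi) / max(T, 1e-12))
    return w
```

As T falls, large-margin errors contribute vanishingly small updates, so the weights stabilize by the end of the training period even when the problem is not linearly separable; at high T the rule behaves like Rosenblatt's original.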

Cite

Text

Frean. "A 'Thermal' Perceptron Learning Rule." Neural Computation, 1992. doi:10.1162/NECO.1992.4.6.946

Markdown

[Frean. "A 'Thermal' Perceptron Learning Rule." Neural Computation, 1992.](https://mlanthology.org/neco/1992/frean1992neco-thermal/) doi:10.1162/NECO.1992.4.6.946

BibTeX

@article{frean1992neco-thermal,
  title     = {{A ``Thermal'' Perceptron Learning Rule}},
  author    = {Frean, Marcus R.},
  journal   = {Neural Computation},
  year      = {1992},
  pages     = {946--957},
  doi       = {10.1162/NECO.1992.4.6.946},
  volume    = {4},
  url       = {https://mlanthology.org/neco/1992/frean1992neco-thermal/}
}