Relaxation Networks for Large Supervised Learning Problems

Abstract

Feedback connections are required so that the teacher signal on the output neurons can modify weights during supervised learning. Relaxation methods are needed for learning static patterns with full-time feedback connections. Feedback-network learning techniques have not achieved wide popularity because of the still greater computational efficiency of back-propagation. We show by simulation that relaxation networks of the kind we are implementing in VLSI are capable of learning large problems comparably to back-propagation networks. A microchip incorporates deterministic mean-field theory learning as well as stochastic Boltzmann learning. A multiple-chip electronic system implementing these networks will make high-speed parallel learning feasible in the future.
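To make the relaxation-based learning the abstract refers to concrete, here is a minimal NumPy sketch of mean-field (deterministic Boltzmann) contrastive learning in a small symmetric feedback network: the network relaxes to a fixed point with only the inputs clamped (free phase), relaxes again with the teacher signal clamped on the output (clamped phase), and updates each weight from the difference of pairwise correlations between the two phases. The network size, XOR task, learning rate, and relaxation schedule are illustrative assumptions, not details from the paper or its VLSI implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relax(s, W, b, clamped, n_steps=50, beta=1.0):
    """Iterate the mean-field equations s_i = tanh(beta * (sum_j W_ij s_j + b_i)),
    holding the units listed in `clamped` fixed."""
    free = np.setdiff1d(np.arange(len(s)), clamped)
    for _ in range(n_steps):
        s[free] = np.tanh(beta * (W[free] @ s + b[free]))
    return s

# Toy network: 2 input, 2 hidden, 1 output unit, fully connected, symmetric weights.
n = 5
idx_in, idx_hid, idx_out = [0, 1], [2, 3], [4]
W = 0.1 * rng.standard_normal((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
b = np.zeros(n)

# XOR in +/-1 coding (illustrative task, not from the paper).
data = [([-1, -1], -1), ([-1, 1], 1), ([1, -1], 1), ([1, 1], -1)]

eta = 0.05
for epoch in range(2000):
    for x, y in data:
        # Free (negative) phase: clamp only the inputs.
        s_minus = np.zeros(n)
        s_minus[idx_in] = x
        s_minus = relax(s_minus, W, b, clamped=idx_in)

        # Clamped (positive) phase: clamp inputs and the teacher signal on the output.
        s_plus = np.zeros(n)
        s_plus[idx_in] = x
        s_plus[idx_out] = y
        s_plus = relax(s_plus, W, b, clamped=idx_in + idx_out)

        # Contrastive Hebbian update: correlations in clamped phase minus free phase.
        dW = eta * (np.outer(s_plus, s_plus) - np.outer(s_minus, s_minus))
        np.fill_diagonal(dW, 0.0)
        W += (dW + dW.T) / 2  # keep the weight matrix symmetric
        b += eta * (s_plus - s_minus)

# Inspect the free-running network's outputs after training.
for x, y in data:
    s = np.zeros(n)
    s[idx_in] = x
    s = relax(s, W, b, clamped=idx_in)
    print(x, "->", round(float(s[idx_out][0]), 2), "target", y)
```

Stochastic Boltzmann learning, also mentioned in the abstract, uses the same two-phase correlation difference but estimates the correlations by sampling binary units at a finite temperature rather than by iterating deterministic mean-field equations.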

Cite

Text

Alspector et al. "Relaxation Networks for Large Supervised Learning Problems." Neural Information Processing Systems, 1990.

Markdown

[Alspector et al. "Relaxation Networks for Large Supervised Learning Problems." Neural Information Processing Systems, 1990.](https://mlanthology.org/neurips/1990/alspector1990neurips-relaxation/)

BibTeX

@inproceedings{alspector1990neurips-relaxation,
  title     = {{Relaxation Networks for Large Supervised Learning Problems}},
  author    = {Alspector, Joshua and Allen, Robert B. and Jayakumar, Anthony and Zeppenfeld, Torsten and Meir, Ronny},
  booktitle = {Neural Information Processing Systems},
  year      = {1990},
  pages     = {1015-1021},
  url       = {https://mlanthology.org/neurips/1990/alspector1990neurips-relaxation/}
}