Including Hints in Training Neural Nets

Abstract

The aim of a neural net is to partition the data space into near-optimal decision regions. Learning such a partitioning solely from examples has proven to be a very hard problem (Blum and Rivest 1988; Judd 1988). To remedy this, we use the idea of supplying hints to the network—as discussed by Abu-Mostafa (1990). Hints reduce the solution space and, as a consequence, speed up the learning process. The minimum Hamming distance between the patterns serves as the hint. Next, it is shown how to learn such a hint and how to incorporate it into the learning algorithm. Modifications in the net structure and its operation are suggested, which allow for better generalization. The sensitivity to errors in such a hint is studied through simulations.
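The hint the abstract refers to is the minimum Hamming distance over all pairs of training patterns. As an illustrative sketch only (not the authors' code), this quantity can be computed for a set of binary patterns as follows:

```python
from itertools import combinations

def hamming(a, b):
    """Number of positions in which two equal-length binary patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def min_hamming_distance(patterns):
    """Minimum pairwise Hamming distance over a set of binary patterns."""
    return min(hamming(a, b) for a, b in combinations(patterns, 2))

# Hypothetical toy patterns for illustration.
patterns = [(0, 0, 1, 1), (0, 1, 1, 0), (1, 1, 0, 0)]
print(min_hamming_distance(patterns))  # -> 2
```

A larger minimum distance means the patterns are better separated, which is why this single number can constrain (and thus shrink) the space of admissible decision regions.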

Cite

Text

Al-Mashouq and Reed. "Including Hints in Training Neural Nets." Neural Computation, 1991. doi:10.1162/NECO.1991.3.3.418

Markdown

[Al-Mashouq and Reed. "Including Hints in Training Neural Nets." Neural Computation, 1991.](https://mlanthology.org/neco/1991/almashouq1991neco-including/) doi:10.1162/NECO.1991.3.3.418

BibTeX

@article{almashouq1991neco-including,
  title     = {{Including Hints in Training Neural Nets}},
  author    = {Al-Mashouq, Khalid A. and Reed, Irving S.},
  journal   = {Neural Computation},
  year      = {1991},
  pages     = {418--427},
  doi       = {10.1162/NECO.1991.3.3.418},
  volume    = {3},
  url       = {https://mlanthology.org/neco/1991/almashouq1991neco-including/}
}