A Cost Function for Internal Representations

Abstract

We introduce a cost function for learning in feed-forward neural networks which is an explicit function of the internal representation in addition to the weights. The learning problem can then be formulated as two simple perceptrons and a search for internal representations. Back-propagation is recovered as a limit. The frequency of successful solutions is better for this algorithm than for back-propagation when weights and hidden units are updated on the same timescale, i.e., once every learning step.
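The idea in the abstract can be sketched numerically: treat the hidden-unit activities as free variables S alongside the weights, and minimize a cost that is the sum of two single-layer (perceptron) errors, one from hidden to output and one from input to hidden. The following NumPy sketch is an illustrative assumption about the form of such a cost (E = ||ζ − g(SW)||² + ||S − g(ξw)||²), not the paper's exact formulation; all symbol names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
g = np.tanh  # transfer function

# XOR task in +/-1 coding: inputs xi, targets zeta
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], float)
T = np.array([[-1], [1], [1], [-1]], float)

H = 3                                     # number of hidden units
w = rng.normal(scale=0.5, size=(2, H))    # input -> hidden weights
W = rng.normal(scale=0.5, size=(H, 1))    # hidden -> output weights
S = g(X @ w)                              # internal representations, treated as free variables

err0 = np.mean((T - g(g(X @ w) @ W)) ** 2)  # initial error, for comparison

eta = 0.1
for step in range(5000):
    # Cost E = ||T - g(S W)||^2 + ||S - g(X w)||^2: two perceptron problems.
    out = g(S @ W)
    d_out = (T - out) * (1 - out ** 2)    # error signal for the output perceptron
    hid = g(X @ w)
    d_hid = (S - hid) * (1 - hid ** 2)    # error signal for the hidden perceptron

    # Gradient updates of both weight layers (the two simple perceptrons),
    # plus a gradient step on the internal representations S themselves --
    # weights and hidden units updated on the same timescale.
    W += eta * S.T @ d_out
    w += eta * X.T @ d_hid
    S += eta * (d_out @ W.T - (S - hid))

err = np.mean((T - g(g(X @ w) @ W)) ** 2)
```

After training, `err` is the mean squared error of the ordinary feed-forward pass (hidden activities recomputed from the inputs), so the search over S only matters during learning.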

Cite

Text

Krogh et al. "A Cost Function for Internal Representations." Neural Information Processing Systems, 1989.

Markdown

[Krogh et al. "A Cost Function for Internal Representations." Neural Information Processing Systems, 1989.](https://mlanthology.org/neurips/1989/krogh1989neurips-cost/)

BibTeX

@inproceedings{krogh1989neurips-cost,
  title     = {{A Cost Function for Internal Representations}},
  author    = {Krogh, Anders and Thorbergsson, C. I. and Hertz, John A.},
  booktitle = {Neural Information Processing Systems},
  year      = {1989},
  pages     = {733--740},
  url       = {https://mlanthology.org/neurips/1989/krogh1989neurips-cost/}
}