MLP in Layer-Wise Form with Applications to Weight Decay

Abstract

A simple and general calculus for the sensitivity analysis of a feedforward MLP network in a layer-wise form is presented. Based on the local optimality conditions, some consequences for the least-mean-squares learning problem are stated and further discussed. Numerical experiments formulating and comparing different weight decay techniques are included.
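As a point of reference for the weight decay techniques compared in the paper, a minimal sketch of the classical approach is an L2 penalty on the weights added to the least-mean-squares objective. The code below is an illustrative example only (a single linear layer rather than a full MLP; the function names and hyperparameter values are assumptions, not from the paper):

```python
import numpy as np


def lms_decay_grad(W, X, y, lam):
    # Gradient of the penalized objective
    #   J(W) = (1/2n) * ||X W - y||^2 + (lam/2) * ||W||^2
    # with respect to W: data-fit term plus the weight decay term lam * W.
    n = X.shape[0]
    resid = X @ W - y
    return X.T @ resid / n + lam * W


def train(X, y, lam, lr=0.1, steps=500, seed=0):
    # Plain gradient descent on the penalized least-mean-squares objective.
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.normal(size=(X.shape[1], 1))
    for _ in range(steps):
        W -= lr * lms_decay_grad(W, X, y, lam)
    return W


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 5))
    y = X @ rng.normal(size=(5, 1))
    # A larger decay coefficient shrinks the learned weights toward zero.
    W_free = train(X, y, lam=0.0)
    W_decayed = train(X, y, lam=1.0)
    print(np.linalg.norm(W_decayed) < np.linalg.norm(W_free))
```

The decay coefficient `lam` trades data fit against weight magnitude; with `lam=0` the update reduces to ordinary least-mean-squares gradient descent.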

Cite

Text

Kärkkäinen. "MLP in Layer-Wise Form with Applications to Weight Decay." Neural Computation, 2002. doi:10.1162/089976602753713016

Markdown

[Kärkkäinen. "MLP in Layer-Wise Form with Applications to Weight Decay." Neural Computation, 2002.](https://mlanthology.org/neco/2002/karkkainen2002neco-mlp/) doi:10.1162/089976602753713016

BibTeX

@article{karkkainen2002neco-mlp,
  title     = {{MLP in Layer-Wise Form with Applications to Weight Decay}},
  author    = {Kärkkäinen, Tommi},
  journal   = {Neural Computation},
  year      = {2002},
  pages     = {1451--1480},
  doi       = {10.1162/089976602753713016},
  volume    = {14},
  url       = {https://mlanthology.org/neco/2002/karkkainen2002neco-mlp/}
}