Efficient Block Training of Multilayer Perceptrons
Abstract
The attractive possibility of applying layerwise block training algorithms to multilayer perceptrons (MLPs), which offers initial advantages in computational effort, is refined in this article by introducing a sensitivity correction factor into the formulation. This results in a clear performance advantage, which we verify in several applications. The reasons for this advantage are discussed and related to implicit connections with second-order techniques, natural gradient formulations through Fisher's information matrix, and sample selection. Extensions to recurrent networks and other research lines are suggested at the close of the article.
Cite
Text
Navia-Vázquez and Figueiras-Vidal. "Efficient Block Training of Multilayer Perceptrons." Neural Computation, 2000. doi:10.1162/089976600300015448
Markdown
[Navia-Vázquez and Figueiras-Vidal. "Efficient Block Training of Multilayer Perceptrons." Neural Computation, 2000.](https://mlanthology.org/neco/2000/naviavazquez2000neco-efficient/) doi:10.1162/089976600300015448
BibTeX
@article{naviavazquez2000neco-efficient,
title = {{Efficient Block Training of Multilayer Perceptrons}},
author = {Navia-Vázquez, Ángel and Figueiras-Vidal, Aníbal R.},
journal = {Neural Computation},
year = {2000},
pages = {1429-1447},
doi = {10.1162/089976600300015448},
volume = {12},
url = {https://mlanthology.org/neco/2000/naviavazquez2000neco-efficient/}
}