Improving Generalization Capabilities of Dynamic Neural Networks
Abstract
This work addresses the problem of improving the generalization capabilities of continuous recurrent neural networks. The learning task is transformed into an optimal control framework in which the weights and the initial network state are treated as unknown controls. A new learning algorithm based on a variational formulation of Pontryagin's maximum principle is proposed. Under reasonable assumptions, its convergence is discussed. Numerical examples are given that demonstrate a substantial improvement of the generalization capabilities of a dynamic network after the learning process.
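For concreteness, a minimal sketch of the optimal control framing follows, assuming a generic continuous-time state equation with state $x(t)$, weight matrix $W$, teacher signal $d(t)$, and running cost $\ell$; the paper's exact network dynamics and cost functional are not reproduced on this page. The weights $W$ and the initial state $x_0$ play the role of the controls:

$$\dot{x}(t) = f\bigl(x(t), W\bigr), \qquad x(0) = x_0,$$

$$J(W, x_0) = \int_0^T \ell\bigl(x(t), d(t)\bigr)\, dt .$$

In this standard formulation, the variational (Pontryagin) conditions introduce a costate trajectory $p(t)$ through the Hamiltonian

$$H(x, p, W, t) = \ell\bigl(x, d(t)\bigr) + p^{\top} f(x, W),$$

$$\dot{p}(t) = -\frac{\partial H}{\partial x}, \qquad p(T) = 0,$$

so that $\partial J / \partial x_0 = p(0)$ and $\partial J / \partial W = \int_0^T \partial H / \partial W \, dt$, which supplies the gradients used to update both controls during learning.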
Cite
Text
Galicki et al. "Improving Generalization Capabilities of Dynamic Neural Networks." Neural Computation, 2004. doi:10.1162/089976604773717603

Markdown
[Galicki et al. "Improving Generalization Capabilities of Dynamic Neural Networks." Neural Computation, 2004.](https://mlanthology.org/neco/2004/galicki2004neco-improving/) doi:10.1162/089976604773717603

BibTeX
@article{galicki2004neco-improving,
  title = {{Improving Generalization Capabilities of Dynamic Neural Networks}},
  author = {Galicki, Miroslaw and Leistritz, Lutz and Zwick, Ernst Bernhard and Witte, Herbert},
  journal = {Neural Computation},
  year = {2004},
  volume = {16},
  pages = {1253--1282},
  doi = {10.1162/089976604773717603},
  url = {https://mlanthology.org/neco/2004/galicki2004neco-improving/}
}