Neural Networks and Nonlinear Adaptive Filtering: Unifying Concepts and New Algorithms

Abstract

The paper proposes a general framework that encompasses both the training of neural networks and the adaptation of filters. We show that neural networks can be considered as general nonlinear filters that can be trained adaptively, that is, they can undergo continual training on a possibly infinite sequence of time-ordered examples. We introduce the canonical form of a neural network. This canonical form permits a unified presentation of network architectures and of gradient-based training algorithms for both feedforward networks (transversal filters) and feedback networks (recursive filters). We show that several algorithms used classically in linear adaptive filtering, and some algorithms suggested by other authors for training neural networks, are special cases in a general classification of training algorithms for feedback networks.
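To make the filtering view concrete, here is a minimal sketch (not the paper's exact algorithm) of a feedforward network used as a nonlinear transversal filter and trained adaptively: at each time step the network sees a sliding window of past inputs and receives one stochastic-gradient update, in the same spirit as LMS adaptation of a linear transversal filter. All names, sizes, and the target system are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: online gradient adaptation of a one-hidden-layer
# network acting as a nonlinear transversal filter. The target system
# (sin of the window sum) and all hyperparameters are assumptions made
# for this example, not taken from the paper.

rng = np.random.default_rng(0)
order, hidden, lr = 4, 8, 0.05            # filter order (tap count), hidden units, step size

W1 = rng.normal(0, 0.5, (hidden, order))  # input-to-hidden weights
b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.5, hidden)           # hidden-to-output weights
b2 = 0.0

def predict(x):
    """One forward pass of the nonlinear transversal filter."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2, h

u = rng.normal(size=2000)                 # input signal stream
errs = []
for t in range(order, len(u)):
    x = u[t - order:t]                    # sliding window of past inputs (transversal structure)
    d = np.sin(x.sum()) + 0.01 * rng.normal()  # desired (reference) output, plus noise
    y, h = predict(x)
    e = d - y                             # instantaneous output error
    # One stochastic-gradient step on the squared error, LMS-style:
    W2 += lr * e * h
    b2 += lr * e
    g = lr * e * W2 * (1 - h**2)          # backpropagate through the tanh layer
    W1 += np.outer(g, x)
    b1 += g
    errs.append(e**2)

print(np.mean(errs[:100]), np.mean(errs[-100:]))
```

Because each example is used once, in time order, the same loop runs unchanged on an unbounded stream; a feedback (recursive) filter would additionally feed delayed outputs back into the window, which is where the paper's classification of training algorithms comes in.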

Cite

Text

Nerrand et al. "Neural Networks and Nonlinear Adaptive Filtering: Unifying Concepts and New Algorithms." Neural Computation, 1993. doi:10.1162/NECO.1993.5.2.165

Markdown

[Nerrand et al. "Neural Networks and Nonlinear Adaptive Filtering: Unifying Concepts and New Algorithms." Neural Computation, 1993.](https://mlanthology.org/neco/1993/nerrand1993neco-neural/) doi:10.1162/NECO.1993.5.2.165

BibTeX

@article{nerrand1993neco-neural,
  title     = {{Neural Networks and Nonlinear Adaptive Filtering: Unifying Concepts and New Algorithms}},
  author    = {Nerrand, Olivier and Roussel-Ragot, Pierre and Personnaz, Léon and Dreyfus, Gérard and Marcos, Sylvie},
  journal   = {Neural Computation},
  year      = {1993},
  pages     = {165--199},
  doi       = {10.1162/NECO.1993.5.2.165},
  volume    = {5},
  number    = {2},
  url       = {https://mlanthology.org/neco/1993/nerrand1993neco-neural/}
}