Convergent Decomposition Techniques for Training RBF Neural Networks

Abstract

In this article we define globally convergent decomposition algorithms for supervised training of generalized radial basis function neural networks. First, we consider training algorithms based on the two-block decomposition of the network parameters into the vector of weights and the vector of centers. Then we define a decomposition algorithm in which the selection of the center locations is split into sequential minimizations with respect to each center, and we give a suitable criterion for choosing the centers that must be updated at each step. We prove the global convergence of the proposed algorithms and report the computational results obtained for a set of test problems.
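To make the two-block idea concrete, here is a minimal illustrative sketch, not the paper's exact algorithm: the parameters of a Gaussian RBF model are split into the weight vector and the center locations, and training alternates between (a) an exact linear least-squares solve for the weights with the centers fixed and (b) a gradient step on the squared error with respect to the centers with the weights fixed. The width `sigma`, the learning rate `lr`, and the iteration count are illustrative assumptions, and the convergence safeguards of the paper are omitted.

```python
import numpy as np

def rbf_design(X, C, sigma=1.0):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-||x_i - c_j||^2 / sigma^2)."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / sigma ** 2)

def train_two_block(X, y, C, sigma=1.0, iters=50, lr=0.02):
    """Illustrative two-block alternating scheme (weights block / centers block).

    X: (n, d) inputs, y: (n,) targets, C: (m, d) initial centers.
    Returns the final weights w (m,) and updated centers C (m, d).
    """
    C = C.copy()
    for _ in range(iters):
        Phi = rbf_design(X, C, sigma)
        # (a) weights block: convex subproblem, solved exactly by least squares
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        r = Phi @ w - y  # residuals of the current model
        # (b) centers block: gradient of 0.5*||r||^2 w.r.t. each center c_j
        for j in range(C.shape[0]):
            diff = X - C[j]                                         # (n, d)
            g = (2.0 / sigma ** 2) * (r * w[j] * Phi[:, j]) @ diff  # chain rule
            C[j] -= lr * g
    # final weights for the updated centers
    w, *_ = np.linalg.lstsq(rbf_design(X, C, sigma), y, rcond=None)
    return w, C
```

The paper's second algorithm refines step (b) by minimizing over one center at a time and selecting which centers to update at each step; the sketch above updates all centers jointly for brevity.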

Cite

Text

Buzzi et al. "Convergent Decomposition Techniques for Training RBF Neural Networks." Neural Computation, 2001. doi:10.1162/08997660152469396

Markdown

[Buzzi et al. "Convergent Decomposition Techniques for Training RBF Neural Networks." Neural Computation, 2001.](https://mlanthology.org/neco/2001/buzzi2001neco-convergent/) doi:10.1162/08997660152469396

BibTeX

@article{buzzi2001neco-convergent,
  title     = {{Convergent Decomposition Techniques for Training RBF Neural Networks}},
  author    = {Buzzi, C. and Grippo, Luigi and Sciandrone, Marco},
  journal   = {Neural Computation},
  year      = {2001},
  pages     = {1891--1920},
  doi       = {10.1162/08997660152469396},
  volume    = {13},
  url       = {https://mlanthology.org/neco/2001/buzzi2001neco-convergent/}
}