RBF Neural Networks and Descartes' Rule of Signs
Abstract
We establish versions of Descartes' rule of signs for radial basis function (RBF) neural networks. These RBF rules of signs provide tight bounds for the number of zeros of univariate networks with certain parameter restrictions. Moreover, they can be used to derive tight bounds for the Vapnik-Chervonenkis (VC) dimension and pseudo-dimension of these networks. In particular, we show that these dimensions are no more than linear. This result contrasts with previous work showing that RBF neural networks with two or more input nodes have superlinear VC dimension. The rules also give rise to lower bounds for network sizes, thus demonstrating the relevance of network parameters for the complexity of computing with RBF neural networks.
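As background for the rules established in the paper, the classical Descartes' rule of signs bounds the number of positive real roots of a polynomial by the number of sign changes in its coefficient sequence (and the two quantities have the same parity). A minimal sketch of the classical rule, not the paper's RBF variant:

```python
def sign_changes(coeffs):
    """Count sign changes in a coefficient sequence, ignoring zero coefficients."""
    nonzero = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(nonzero, nonzero[1:]) if a * b < 0)

# p(x) = x^3 - 2x^2 - x + 2 = (x - 1)(x + 1)(x - 2)
# has exactly two positive roots (1 and 2); the rule gives an upper bound of 2.
coeffs = [1, -2, -1, 2]
print(sign_changes(coeffs))  # 2 sign changes => at most 2 positive real roots
```

The paper's contribution is to prove analogous sign-change bounds for the zeros of univariate RBF networks, which are sums of Gaussians rather than monomials.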
Cite
Schmitt. "RBF Neural Networks and Descartes' Rule of Signs." International Conference on Algorithmic Learning Theory, 2002. doi:10.1007/3-540-36169-3_26
@inproceedings{schmitt2002alt-rbf,
title = {{RBF Neural Networks and Descartes' Rule of Signs}},
author = {Schmitt, Michael},
booktitle = {International Conference on Algorithmic Learning Theory},
year = {2002},
  pages = {321--335},
doi = {10.1007/3-540-36169-3_26},
url = {https://mlanthology.org/alt/2002/schmitt2002alt-rbf/}
}