Representing Probabilistic Rules with Networks of Gaussian Basis Functions
Abstract
There is great interest in understanding the intrinsic knowledge neural networks have acquired during training. Most work in this direction is focused on the multi-layer perceptron architecture. The topic of this paper is networks of Gaussian basis functions, which are used extensively as learning systems in neural computation. We show that networks of Gaussian basis functions can be generated from simple probabilistic rules. Also, if appropriate learning rules are used, probabilistic rules can be extracted from trained networks. We present methods for the reduction of network complexity with the goal of obtaining concise and meaningful rules. We show how prior knowledge can be refined or supplemented using data by employing a Bayesian approach, a weighted combination of knowledge bases, or artificial training data that represents the prior knowledge. We validate our approach using a standard statistical data set.
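The abstract's central object, a network of Gaussian basis functions, can be illustrated with a minimal sketch. The function `gbf_predict` below is not the paper's implementation; it is a generic normalized Gaussian basis function network in one dimension, where the prediction is a weighted average of per-unit output values, with weights given by normalized Gaussian activations. The centers, widths, and output values are hypothetical parameters chosen for illustration.

```python
import numpy as np

def gbf_predict(x, centers, widths, values):
    """Normalized Gaussian basis function network (illustrative sketch).

    y(x) = sum_i v_i * g_i(x) / sum_i g_i(x),
    where g_i(x) = exp(-(x - c_i)^2 / (2 * s_i^2)).
    """
    g = np.exp(-((x - centers) ** 2) / (2.0 * widths ** 2))
    return float(np.sum(values * g) / np.sum(g))

# Two basis units with equal widths; halfway between the centers,
# both units are equally active, so the prediction is the mean of
# their output values.
centers = np.array([0.0, 1.0])
widths = np.array([1.0, 1.0])
values = np.array([0.0, 1.0])
y = gbf_predict(0.5, centers, widths, values)  # → 0.5
```

Because the activations are normalized to sum to one, each unit can be read as a local rule ("if x is near c_i, then the output is near v_i"), which is the kind of correspondence between network units and probabilistic rules the paper develops.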
Cite
Text
Tresp et al. "Representing Probabilistic Rules with Networks of Gaussian Basis Functions." Machine Learning, 1997. doi:10.1023/A:1007381408604
Markdown
[Tresp et al. "Representing Probabilistic Rules with Networks of Gaussian Basis Functions." Machine Learning, 1997.](https://mlanthology.org/mlj/1997/tresp1997mlj-representing/) doi:10.1023/A:1007381408604
BibTeX
@article{tresp1997mlj-representing,
  title = {{Representing Probabilistic Rules with Networks of Gaussian Basis Functions}},
  author = {Tresp, Volker and Hollatz, J{\"u}rgen and Ahmad, Subutai},
  journal = {Machine Learning},
  year = {1997},
  volume = {27},
  pages = {173--200},
  doi = {10.1023/A:1007381408604},
  url = {https://mlanthology.org/mlj/1997/tresp1997mlj-representing/}
}