Using Symbolic Learning to Improve Knowledge-Based Neural Networks
Abstract
The previously-reported Kbann system integrates existing knowledge into neural networks by defining the network topology and setting initial link weights. Standard neural learning techniques can then be used to train such networks, thereby refining the information upon which the network is based. However, standard neural learning techniques are reputed to have difficulty training networks with multiple layers of hidden units; Kbann commonly creates such networks. In addition, standard neural learning techniques ignore some of the information contained in the networks created by Kbann. This paper describes a symbolic inductive learning algorithm for training such networks that uses this previously-ignored information and which helps to address the problems of training "deep" networks. Empirical evidence shows that this method improves not only learning speed, but also the ability of networks to generalize correctly to testing examples.

Introduction

Kbann is a "hybrid" learning system; ...
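The translation step the abstract describes, mapping existing knowledge into a network topology with preset link weights, can be illustrated with a minimal sketch. The sketch below encodes propositional rules as sigmoid units whose weights and biases realize AND/OR semantics; the weight value `OMEGA = 4.0` and the bias formulas are illustrative assumptions for this sketch, not values taken from the paper.

```python
import math

# Illustrative weight magnitude for links corresponding to rule antecedents
# (an assumption for this sketch, not a value from the paper).
OMEGA = 4.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def make_and_unit(n_antecedents):
    """Unit for a conjunctive rule: all antecedents must be active.
    The bias places the threshold between n and n-1 satisfied antecedents."""
    weights = [OMEGA] * n_antecedents
    bias = -(n_antecedents - 0.5) * OMEGA
    return weights, bias

def make_or_unit(n_antecedents):
    """Unit for a disjunctive rule: any single antecedent suffices."""
    weights = [OMEGA] * n_antecedents
    bias = -0.5 * OMEGA
    return weights, bias

def activate(weights, bias, inputs):
    """Standard sigmoid unit over weighted inputs plus bias."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

# Example: the rule "c :- a, b" becomes an AND unit over two inputs.
w, b = make_and_unit(2)
print(activate(w, b, [1, 1]) > 0.5)  # both antecedents satisfied -> True
print(activate(w, b, [1, 0]) > 0.5)  # one antecedent missing -> False
```

After this translation, the resulting weights serve as the initial point for training, so refinement starts from the given knowledge rather than from random weights.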
Cite
Text
Towell and Shavlik. "Using Symbolic Learning to Improve Knowledge-Based Neural Networks." AAAI Conference on Artificial Intelligence, 1992.
Markdown
[Towell and Shavlik. "Using Symbolic Learning to Improve Knowledge-Based Neural Networks." AAAI Conference on Artificial Intelligence, 1992.](https://mlanthology.org/aaai/1992/towell1992aaai-using/)
BibTeX
@inproceedings{towell1992aaai-using,
title = {{Using Symbolic Learning to Improve Knowledge-Based Neural Networks}},
author = {Towell, Geoffrey G. and Shavlik, Jude W.},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {1992},
pages = {177--182},
url = {https://mlanthology.org/aaai/1992/towell1992aaai-using/}
}