Integrating Inductive Neural Network Learning and Explanation-Based Learning
Abstract
Many researchers have noted the importance of combining inductive and analytical learning, yet we still lack combined learning methods that are effective in practice. We present here a learning method that combines explanation-based learning from a previously learned approximate domain theory with inductive learning from observations. This method, called explanation-based neural network learning (EBNN), is based on a neural network representation of domain knowledge. Explanations are constructed by chaining together inferences from multiple neural networks. In contrast with symbolic approaches to explanation-based learning, which extract weakest preconditions from the explanation, EBNN extracts the derivatives of the target concept with respect to the training example features. These derivatives summarize the dependencies within the explanation and are used to bias the inductive learning of the target concept. Experimental results on a simulated robot control task show that E...
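The core idea of biasing induction with explanation-derived derivatives can be illustrated with a minimal sketch (not the paper's implementation): fit a simple parametric model so that it matches both observed target values and slope targets supplied by an analytical source, in the spirit of EBNN's value-and-slope fitting. All names and the quadratic feature map below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: learn f(x) = w0 + w1*x + w2*x^2 so that it matches
# both observed target values AND slopes derived analytically (here, the
# stand-in for derivatives extracted from an EBNN explanation).

# True concept (unknown to the learner): f*(x) = x^2
xs = np.array([-1.0, 0.0, 1.0])
ys = xs ** 2          # target values from observation (inductive signal)
slopes = 2.0 * xs     # target slopes from the "explanation" (analytical signal)

def features(x):
    # Feature map phi(x) = [1, x, x^2]
    return np.stack([np.ones_like(x), x, x ** 2], axis=-1)

def dfeatures(x):
    # Derivative of the feature map: dphi/dx = [0, 1, 2x]
    return np.stack([np.zeros_like(x), np.ones_like(x), 2.0 * x], axis=-1)

lam = 1.0  # relative weight of the slope (analytical) term
# Stacked least squares: value rows plus sqrt(lam)-scaled slope rows,
# minimizing sum (f(x)-y)^2 + lam * sum (f'(x)-s)^2.
A = np.vstack([features(xs), np.sqrt(lam) * dfeatures(xs)])
b = np.concatenate([ys, np.sqrt(lam) * slopes])
w, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.round(w, 6))
```

With only three value samples the fit is underdetermined in practice, but the slope constraints pin the model down to the true concept; this is the sense in which the derivatives "bias" the inductive learner.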
Cite

Text:
Thrun and Mitchell. "Integrating Inductive Neural Network Learning and Explanation-Based Learning." International Joint Conference on Artificial Intelligence, 1993.

Markdown:
[Thrun and Mitchell. "Integrating Inductive Neural Network Learning and Explanation-Based Learning." International Joint Conference on Artificial Intelligence, 1993.](https://mlanthology.org/ijcai/1993/thrun1993ijcai-integrating/)

BibTeX:
@inproceedings{thrun1993ijcai-integrating,
title = {{Integrating Inductive Neural Network Learning and Explanation-Based Learning}},
author = {Thrun, Sebastian and Mitchell, Tom M.},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {1993},
  pages = {930--936},
url = {https://mlanthology.org/ijcai/1993/thrun1993ijcai-integrating/}
}