Learning by Incomplete Explanation-Based Learning
Abstract
It has been observed that the addition of clauses acquired by explanation-based learning (EBL) may decrease the efficiency of a system. The main cause of this problem is the introduction of two kinds of redundancy: search state redundancy and path redundancy. Previous approaches to eliminating redundancy due to EBL have addressed path redundancy only, which has been shown to be the less significant of the two. In this paper we present an EBL algorithm for Horn clause theories, called EGU (Example-Guided Unfolding), that does not introduce search state redundancy. It is based on the observation, made by several researchers, that EBL bears a strong resemblance to partial evaluation in the area of logic programming. The EGU algorithm demonstrates that a training example can be used to guide unfolding while maintaining completeness, which had previously been questioned. Experimental results are presented showing that the problem of decreasing efficiency can be substantially reduced when search state redundancy is eliminated.
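The core operation behind example-guided unfolding can be illustrated with a small sketch. The following Python fragment is only a propositional toy (the actual algorithm in the paper operates on first-order Horn clauses with unification, and keeps the theory complete by replacing clauses with all of their resolvents); the clause names are illustrative, not from the paper. Unfolding replaces a body atom of one clause with the body of a clause whose head proves that atom, and the training example's explanation determines which definition to unfold with:

```python
def unfold(clause, atom_index, definition):
    """Replace one body atom of `clause` with the body of `definition`,
    whose head must match that atom (propositional case only)."""
    head, body = clause
    d_head, d_body = definition
    assert body[atom_index] == d_head, "definition head must match the unfolded atom"
    # Splice the definition's body in place of the selected atom.
    return (head, body[:atom_index] + d_body + body[atom_index + 1:])

# Toy Horn theory, flattened to propositions: (head, [body atoms]).
c1 = ("safe_to_stack", ["lighter", "fragile_below_absent"])
c2 = ("lighter", ["weight_known", "weight_below_limit"])

# Suppose the explanation of a training example used c2 to prove
# "lighter"; unfolding c1 on that atom yields a specialized clause
# that avoids re-deriving "lighter" at problem-solving time.
learned = unfold(c1, 0, c2)
print(learned)
# → ('safe_to_stack', ['weight_known', 'weight_below_limit', 'fragile_below_absent'])
```

In this sketch the example guides *which* clause is unfolded at each step, while completeness would be maintained by also retaining the alternative resolvents rather than simply adding the learned clause alongside the originals.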
Cite
Text
Bhatnagar. "Learning by Incomplete Explanation-Based Learning." International Conference on Machine Learning, 1992. doi:10.1016/B978-1-55860-247-2.50010-3
Markdown
[Bhatnagar. "Learning by Incomplete Explanation-Based Learning." International Conference on Machine Learning, 1992.](https://mlanthology.org/icml/1992/bhatnagar1992icml-learning/) doi:10.1016/B978-1-55860-247-2.50010-3
BibTeX
@inproceedings{bhatnagar1992icml-learning,
title = {{Learning by Incomplete Explanation-Based Learning}},
author = {Bhatnagar, Neeraj},
booktitle = {International Conference on Machine Learning},
year = {1992},
pages = {37-42},
doi = {10.1016/B978-1-55860-247-2.50010-3},
url = {https://mlanthology.org/icml/1992/bhatnagar1992icml-learning/}
}