Learning from Plausible Explanations
Abstract
This chapter explores the incomplete-theory problem, in which a learning system has an explicit domain theory that cannot generate an explanation for every example. The general method is to use the existing domain theory to generate a plausible explanation of the example and to extract from it one or more rules that may then be added to the domain theory. This method is an application of abductive reasoning: it attempts to account for a known conclusion (the goal concept) by proposing hypotheses that, together with the existing domain theory, may explain it. If a complete explanation can be created for an example, the domain theory is adequate and need not be extended. If the example cannot be completely explained, there are usually many partial explanations that can be generated for it. The chapter presents the implementation of a prototype system that is able to extend its domain theory in this way. The goal of this method is to increase the explanatory power of the domain theory rather than to acquire a specific way of recognizing instances of the goal concept.
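The abductive step described above can be sketched in a few lines. The following is a minimal illustration, not Fawcett's implementation: a toy propositional domain theory is backward-chained on a goal, and any premise that can neither be proven from the facts nor derived by a rule is hypothesized. The predicate names (`can_fly`, `bird`, `has_wings`) are invented for the example.

```python
# Toy Horn-clause domain theory: each head maps to the premises
# that establish it. Purely illustrative names.
RULES = {"can_fly": ["bird", "has_wings"]}


def explain(goal, facts, rules, hypotheses):
    """Backward-chain on `goal`. Premises provable from `facts` or
    `rules` are explained; unprovable ones are collected in
    `hypotheses` (the abductive step)."""
    if goal in facts:
        return True
    if goal in rules:
        for premise in rules[goal]:
            explain(premise, facts, rules, hypotheses)
        return True
    # Neither a fact nor the head of any rule: hypothesize it.
    hypotheses.append(goal)
    return False


facts = {"bird"}          # the example only establishes `bird`
hypotheses = []
explain("can_fly", facts, RULES, hypotheses)
print(hypotheses)         # the missing premises the theory must assume
```

Here the partial explanation leaves `has_wings` unproven, so it becomes a candidate addition to the domain theory, mirroring the chapter's goal of extending explanatory power rather than learning a classifier for the goal concept.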
Cite
Text
Fawcett. "Learning from Plausible Explanations." International Conference on Machine Learning, 1989. doi:10.1016/B978-1-55860-036-2.50015-1
Markdown
[Fawcett. "Learning from Plausible Explanations." International Conference on Machine Learning, 1989.](https://mlanthology.org/icml/1989/fawcett1989icml-learning/) doi:10.1016/B978-1-55860-036-2.50015-1
BibTeX
@inproceedings{fawcett1989icml-learning,
title = {{Learning from Plausible Explanations}},
author = {Fawcett, Tom},
booktitle = {International Conference on Machine Learning},
year = {1989},
pages = {37-39},
doi = {10.1016/B978-1-55860-036-2.50015-1},
url = {https://mlanthology.org/icml/1989/fawcett1989icml-learning/}
}