Reasoning About Operationality for Explanation-Based Learning

Abstract

The results of explanation-based learning must be operational. This paper argues that since the generalizations created by explanation-based learning may be only conditionally operational under finer-grained definitions of operationality, the conditions on a generalization's operationality should be determined and included with the generalization. The paper describes ROE, an implementation of explanation-based learning that proves that the predicates used in a generalization are operational for the training instance on which it is based, and then generalizes that proof of operationality to determine the weakest conditions under which the generalization should be used. These conditions are included in ROE's results to restrict their use to situations in which they are operational. Reasoning about and generalizing operationality is accomplished by applying ROE to itself, using a domain theory that concludes whether predicates are operational under whatever notion of operationality the user encodes. A PROLOG implementation of ROE is included as an appendix.
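The core idea in the abstract, proving a goal from a meta-level domain theory and collecting the leaf conditions as the weakest operationality criteria, can be illustrated with a minimal sketch. The rule names, predicates, and the `assume:` convention below are invented for illustration and are not from the paper; a real system such as ROE works over a PROLOG domain theory with unification, not string lookups.

```python
# Hypothetical sketch: meta-level reasoning about operationality.
# A predicate is operational if it is directly assumable (a leaf
# condition) or if it unfolds, via the meta-rules, into operational
# sub-predicates. The leaf assumptions gathered along the proof play
# the role of the generalized conditions attached to a learned rule.

META_RULES = {
    # head -> body conditions; "assume:" marks leaf conditions that
    # survive generalization as operationality criteria.
    "operational(safe_to_stack)": ["operational(lighter)"],
    "operational(lighter)": ["assume:weights_known"],
}

def prove_operational(goal, rules):
    """Return the leaf assumptions under which `goal` is provable,
    or None if no proof exists."""
    if goal.startswith("assume:"):
        return [goal[len("assume:"):]]
    body = rules.get(goal)
    if body is None:
        return None          # no rule concludes this goal
    conditions = []
    for subgoal in body:
        sub_conditions = prove_operational(subgoal, rules)
        if sub_conditions is None:
            return None      # proof fails if any subgoal fails
        conditions.extend(sub_conditions)
    return conditions

# The proof succeeds, and its leaf assumptions become the conditions
# under which the generalization counts as operational.
print(prove_operational("operational(safe_to_stack)", META_RULES))
```

The sketch flattens away unification and variable bindings, which is where the actual "weakest conditions" generalization happens in an EBG-style system; it only conveys the shape of the meta-level proof-and-collect step.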

Cite

Text

Hirsh. "Reasoning About Operationality for Explanation-Based Learning." International Conference on Machine Learning, 1988. doi:10.1016/B978-0-934613-64-4.50028-1

Markdown

[Hirsh. "Reasoning About Operationality for Explanation-Based Learning." International Conference on Machine Learning, 1988.](https://mlanthology.org/icml/1988/hirsh1988icml-reasoning/) doi:10.1016/B978-0-934613-64-4.50028-1

BibTeX

@inproceedings{hirsh1988icml-reasoning,
  title     = {{Reasoning About Operationality for Explanation-Based Learning}},
  author    = {Hirsh, Haym},
  booktitle = {International Conference on Machine Learning},
  year      = {1988},
  pages     = {214--220},
  doi       = {10.1016/B978-0-934613-64-4.50028-1},
  url       = {https://mlanthology.org/icml/1988/hirsh1988icml-reasoning/}
}