Learning from Deliberated Reactivity
Abstract
An important problem in machine learning research is acquiring the information necessary for learning new behaviors. This problem is particularly difficult when the goal is to analytically learn rules for new reactive strategies, because a primary goal of reactive systems is to avoid computing and maintaining the complex information that is necessary for analytic learning. We describe a solution to this problem, in which a planner (which is invoked only when necessary) uses the reactivity rules in the course of planning. These invocations of the reactive rules are thus annotated by planning information, such as goal-subgoal-action relations, that is necessary for explanation-based learning. We discuss this approach in the context of the CASTLE system, which learns strategies in the domain of chess.
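The interplay the abstract describes can be illustrated with a minimal, hypothetical sketch (this is not CASTLE's implementation; all class and rule names here are invented for illustration): reactive rules are tried first with no bookkeeping, and only when none applies is a planner invoked. Because the planner uses the same rules while decomposing a goal, each rule firing during planning can be annotated with the goal-subgoal-action relation that explanation-based learning later needs.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    name: str
    applies: Callable[[dict, Optional[str]], bool]  # (state, subgoal) -> bool
    action: str

@dataclass
class Annotation:
    goal: str
    subgoal: str
    action: str

class DeliberatedReactiveAgent:
    def __init__(self, rules, decompositions):
        self.rules = rules
        self.decompositions = decompositions  # goal -> ordered list of subgoals
        self.trace = []  # goal-subgoal-action annotations, raw material for EBL

    def act(self, state, goal):
        # Reactive path: fire the first applicable rule, with no bookkeeping.
        for rule in self.rules:
            if rule.applies(state, None):
                return rule.action
        # Deliberative path: invoke the planner only when no rule fires.
        return self._plan(state, goal)

    def _plan(self, state, goal):
        for subgoal in self.decompositions.get(goal, []):
            for rule in self.rules:
                if rule.applies(state, subgoal):
                    # Each rule use during planning is annotated with its
                    # goal-subgoal-action context; explanation-based learning
                    # can generalize this trace into a new reactive rule.
                    self.trace.append(Annotation(goal, subgoal, rule.action))
                    return rule.action
        return None
```

In this sketch, a purely reactive firing leaves no trace, while a planner-mediated firing records exactly the annotations the paper argues are needed for analytic learning.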
Cite
Text
Krulwich. "Learning from Deliberated Reactivity." International Conference on Machine Learning, 1991. doi:10.1016/B978-1-55860-200-7.50066-0
Markdown
[Krulwich. "Learning from Deliberated Reactivity." International Conference on Machine Learning, 1991.](https://mlanthology.org/icml/1991/krulwich1991icml-learning/) doi:10.1016/B978-1-55860-200-7.50066-0
BibTeX
@inproceedings{krulwich1991icml-learning,
title = {{Learning from Deliberated Reactivity}},
author = {Krulwich, Bruce},
booktitle = {International Conference on Machine Learning},
year = {1991},
pages = {318--322},
doi = {10.1016/B978-1-55860-200-7.50066-0},
url = {https://mlanthology.org/icml/1991/krulwich1991icml-learning/}
}