Iterated Phantom Induction: A Little Knowledge Can Go a Long Way
Abstract
We advance a knowledge-based learning method that augments conventional generalization to permit concept acquisition in failure domains. These are domains in which learning must proceed exclusively with failure examples that are relatively uninformative for conventional methods. A domain theory is used to explain and then systematically perturb the observed failures so that they can be treated as if they were positive training examples. The concept induced from these "phantom" examples is exercised in the world, yielding additional observations, and the process repeats. Surprisingly, an accurate concept can often be learned even if the phantom examples are themselves failures and the domain theory is only imprecise and approximate. We investigate the behavior of the method in a stylized air-hockey domain which demands a nonlinear decision concept. Learning is shown empirically to be robust in the face of degraded domain knowledge. An interpretation is advanced which indicates that the information available from a plausible qualitative domain theory is sufficient for robust successful learning.
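The iterated loop the abstract describes (observe failures, perturb them with an approximate domain theory into "phantom" positive examples, induce a concept, exercise it, repeat) can be sketched in a toy one-dimensional domain. Everything below is an illustrative assumption, not the paper's setup: the paper's experiments use a stylized air-hockey domain with a nonlinear decision concept, whereas this sketch uses a linear target and a deliberately imprecise theory to show why iteration can still converge.

```python
import random

# Hypothetical failure domain: for state x the agent picks an action a,
# and the world reports the signed miss e = a - target(x). Every nonzero
# miss is a failure; the learner never sees a positive example directly.
def target(x):
    return 2.0 * x + 1.0  # true concept, unknown to the learner

def world(x, a):
    return a - target(x)  # signed miss distance (0 would be success)

# Approximate domain theory: "if you missed by e, the right action was
# roughly a - e". The scale factor makes it deliberately imprecise.
def phantom(a, e, theory_error=0.7):
    return a - theory_error * e

def induce(examples):
    # Ordinary least-squares fit of a linear concept a = w*x + b,
    # treating the phantom examples as if they were positive examples.
    n = len(examples)
    sx = sum(x for x, _ in examples)
    sa = sum(a for _, a in examples)
    sxx = sum(x * x for x, _ in examples)
    sxa = sum(x * a for x, a in examples)
    w = (n * sxa - sx * sa) / (n * sxx - sx * sx)
    b = (sa - w * sx) / n
    return lambda x: w * x + b

random.seed(0)
concept = lambda x: 0.0  # initial concept: always wrong
for iteration in range(20):
    xs = [random.uniform(-1.0, 1.0) for _ in range(30)]
    # Exercise the concept, observe failures, perturb into phantoms.
    phantoms = []
    for x in xs:
        a = concept(x)
        e = world(x, a)
        phantoms.append((x, phantom(a, e)))
    concept = induce(phantoms)
```

Even though the theory undercorrects (factor 0.7 instead of 1.0), each round of phantom induction shrinks the residual miss by a constant factor, so the iterated concept converges to the target; this mirrors the abstract's claim that imprecise, qualitative knowledge of the failure's direction can suffice.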
Cite
Text
Brodie and DeJong. "Iterated Phantom Induction: A Little Knowledge Can Go a Long Way." AAAI Conference on Artificial Intelligence, 1998.

BibTeX
@inproceedings{brodie1998aaai-iterated,
title = {{Iterated Phantom Induction: A Little Knowledge Can Go a Long Way}},
author = {Brodie, Mark and DeJong, Gerald},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {1998},
pages = {665--670},
url = {https://mlanthology.org/aaai/1998/brodie1998aaai-iterated/}
}