Experiments in Non-Monotonic Learning
Abstract
Logic continues to play a significant role throughout AI. However, it has long been argued that classical logic is unsatisfactory for real-world problems, for example where non-monotonic reasoning is required. The construction of incremental learning systems is a case in point. A technique called Closed-World Specialisation was recently developed to address the problem of correcting first-order theories within a non-monotonic framework for incremental learning. In this paper we report on experiments combining this technique with two methods of generalisation in first-order logic. A new inductively generated solution achieving 100% predictive accuracy is presented for the task of learning rules of illegality in the KRK chess endgame.
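The KRK (White king and rook versus Black king) illegality task mentioned in the abstract is a standard ILP benchmark: given the six file/rank coordinates of the three pieces, decide whether the position (White to move) is illegal. The sketch below is an illustrative specification of that target concept, not the rules induced in the paper; the function and predicate names (`illegal`, `adjacent`, `between`) and the 0-7 integer coordinates are assumptions for this example.

```python
def adjacent(a, b):
    # kings are adjacent if their Chebyshev distance is at most 1
    return max(abs(a[0] - b[0]), abs(a[1] - b[1])) <= 1

def between(x, a, b):
    # x lies strictly between coordinates a and b
    return min(a, b) < x < max(a, b)

def illegal(wk, wr, bk):
    """Illegality of a White-to-move KRK position.

    wk, wr, bk are (file, rank) pairs with coordinates 0-7 for the
    white king, white rook, and black king respectively.
    """
    # two pieces on the same square
    if wk == wr or wk == bk or wr == bk:
        return True
    # the two kings occupy adjacent squares
    if adjacent(wk, bk):
        return True
    # rook checks the black king along a file, unless the white king blocks
    if wr[0] == bk[0] and not (wk[0] == wr[0] and between(wk[1], wr[1], bk[1])):
        return True
    # rook checks the black king along a rank, unless the white king blocks
    if wr[1] == bk[1] and not (wk[1] == wr[1] and between(wk[0], wr[0], bk[0])):
        return True
    return False
```

For example, `illegal((0, 0), (3, 5), (3, 0))` holds because the rook checks the black king along file 3, while `illegal((3, 2), (3, 5), (3, 0))` does not, since the white king interposes on that file.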
Cite
Text

Bain. "Experiments in Non-Monotonic Learning." International Conference on Machine Learning, 1991. doi:10.1016/B978-1-55860-200-7.50078-7

Markdown

[Bain. "Experiments in Non-Monotonic Learning." International Conference on Machine Learning, 1991.](https://mlanthology.org/icml/1991/bain1991icml-experiments/) doi:10.1016/B978-1-55860-200-7.50078-7

BibTeX
@inproceedings{bain1991icml-experiments,
title = {{Experiments in Non-Monotonic Learning}},
author = {Bain, Michael},
booktitle = {International Conference on Machine Learning},
year = {1991},
pages = {380--384},
doi = {10.1016/B978-1-55860-200-7.50078-7},
url = {https://mlanthology.org/icml/1991/bain1991icml-experiments/}
}