ILP with Noise and Fixed Example Size: A Bayesian Approach

Abstract

Current inductive logic programming systems are limited in their handling of noise because they employ a greedy covering approach, constructing the hypothesis one clause at a time. This approach also makes it difficult to learn recursive predicates. Additionally, many current systems implicitly expect the cardinalities of the positive and negative example sets to reflect the "proportion" of the concept in the instance space. A framework for learning from noisy data and fixed example size is presented. A Bayesian heuristic for finding the most probable hypothesis in this general framework is derived. This approach evaluates a hypothesis as a whole rather than one clause at a time. The heuristic, which has nice theoretical properties, is incorporated in an ILP system, Lime. Experimental results show that Lime handles noise better than FOIL and PROGOL. It is able to learn recursive definitions from noisy data on which other systems do not perform well. Lime is also capable of learn...
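The core idea of scoring a hypothesis as a whole can be sketched as follows. This is an illustrative toy, not Lime's actual heuristic: it ranks candidate hypotheses by an (unnormalized) log posterior under a simple label-flip noise model, where `covers`, the hypothesis representation, and the `noise` rate are all assumptions of the sketch.

```python
import math

def log_posterior(hypothesis, positives, negatives, covers, noise=0.1, log_prior=0.0):
    """Log P(hypothesis | examples) up to a constant: log prior + log likelihood.

    Illustrative only: `covers(h, x)` is assumed to test whether hypothesis h
    entails example x, and `noise` is the assumed label-flip probability.
    """
    score = log_prior
    for x in positives:
        # A covered positive is explained; an uncovered one is attributed to noise.
        score += math.log(1 - noise) if covers(hypothesis, x) else math.log(noise)
    for x in negatives:
        # A covered negative is treated as a noisy label; an uncovered one is consistent.
        score += math.log(noise) if covers(hypothesis, x) else math.log(1 - noise)
    return score

# Toy usage: hypotheses are predicates over integers, examples are labelled ints.
covers = lambda h, x: h(x)
positives, negatives = [2, 4, 6, 7], [1, 3, 5]     # 7 is a "noisy" positive
is_even = lambda x: x % 2 == 0
gt_three = lambda x: x > 3
best = max([is_even, gt_three],
           key=lambda h: log_posterior(h, positives, negatives, covers))
```

Because the whole example set is scored at once, a single noisy positive (7) lowers the score of `is_even` slightly but does not force a covering loop to add a spurious clause for it.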

Cite

Text

McCreath and Sharma. "ILP with Noise and Fixed Example Size: A Bayesian Approach." International Joint Conference on Artificial Intelligence, 1997.

Markdown

[McCreath and Sharma. "ILP with Noise and Fixed Example Size: A Bayesian Approach." International Joint Conference on Artificial Intelligence, 1997.](https://mlanthology.org/ijcai/1997/mccreath1997ijcai-ilp/)

BibTeX

@inproceedings{mccreath1997ijcai-ilp,
  title     = {{ILP with Noise and Fixed Example Size: A Bayesian Approach}},
  author    = {McCreath, Eric and Sharma, Arun},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {1997},
  pages     = {1310--1315},
  url       = {https://mlanthology.org/ijcai/1997/mccreath1997ijcai-ilp/}
}