Complexity-Based Induction

Abstract

A central problem in inductive logic programming is theory evaluation. Without some sort of preference criterion, any two theories that explain a set of examples are equally acceptable. This paper presents a scheme for evaluating alternative inductive theories based on an objective preference criterion. The scheme strives to extract maximal redundancy from the examples, transforming structure into randomness. A major strength of the method is its applicability to learning problems where negative examples of concepts are scarce or unavailable. A new measure called model complexity is introduced, and its use is illustrated and compared with a proof-complexity measure on relational learning tasks. The complementarity of model and proof complexity parallels that of model-theoretic and proof-theoretic semantics. Model complexity, where applicable, seems to be an appropriate measure for evaluating inductive logic theories.
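
To make the preference criterion concrete, the sketch below illustrates the general minimum-description-length idea behind complexity-based evaluation: score a theory by the bits needed to encode the theory itself plus the bits needed to encode the examples given that theory, and prefer the theory with the smaller total. All names and bit costs here (theory_bits, data_bits, the four-bits-per-token and eight-bits-per-character charges) are invented for illustration; they stand in for, and do not reproduce, the paper's model- and proof-complexity measures.

import math

def theory_bits(rules):
    # Toy cost of the theory itself: four bits per whitespace token.
    return 4.0 * sum(len(rule.split()) for rule in rules)

def data_bits(examples, covers):
    # Toy cost of the examples given the theory: an example the theory
    # explains needs only an index into the explained set; an
    # unexplained example must be spelled out character by character.
    explained = [e for e in examples if covers(e)]
    bits = 0.0
    for e in examples:
        if covers(e):
            bits += math.log2(max(len(explained), 2))
        else:
            bits += 8.0 * len(e)
    return bits

def total_bits(rules, examples, covers):
    return theory_bits(rules) + data_bits(examples, covers)

examples = ["grand(ann,cal)", "grand(bob,dee)", "grand(cal,eve)"]

# Theory A: one general rule, assumed here to explain every example.
theory_a = ["grand(X,Z) :- parent(X,Y), parent(Y,Z)"]
cost_a = total_bits(theory_a, examples, lambda e: True)

# Theory B: the empty theory; every example must be encoded verbatim.
cost_b = total_bits([], examples, lambda e: False)

print(f"theory A: {cost_a:.1f} bits, theory B: {cost_b:.1f} bits")

On this toy encoding the rule-based theory wins because the redundancy shared by the examples has been moved into the theory; the balance flips when the theory's cost is no longer amortized across enough examples. Note also that the criterion needs no negative examples, which is why such measures suit learning problems where negatives are scarce or unavailable.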

Cite

Text

Conklin and Witten. "Complexity-Based Induction." Machine Learning, 16:203-225, 1994. doi:10.1007/BF00993307

Markdown

[Conklin and Witten. "Complexity-Based Induction." Machine Learning, 16:203-225, 1994.](https://mlanthology.org/mlj/1994/conklin1994mlj-complexitybased/) doi:10.1007/BF00993307

BibTeX

@article{conklin1994mlj-complexitybased,
  title     = {{Complexity-Based Induction}},
  author    = {Conklin, Darrell and Witten, Ian H.},
  journal   = {Machine Learning},
  year      = {1994},
  volume    = {16},
  pages     = {203--225},
  doi       = {10.1007/BF00993307},
  url       = {https://mlanthology.org/mlj/1994/conklin1994mlj-complexitybased/}
}