Induction from the General to the More General

Abstract

Most learning-theoretic analyses of inductive problems assume that the evidence provided to the theorist is quantifier-free. But scientific methodologists and artificial intelligence programmers have long assumed that theorists receive universal claims as inputs. What are the limits on inductive inference from laws? How much more expressive can the language of a theory be than the language of the data from which it is reliably inferred? This paper presents upper and lower bounds on the relationship between the quantifier complexity of the data and the quantifier complexity of the theory to be reliably inferred from that data. The lower bounds are established with the help of the Keisler n-sandwich theorem, a powerful model-theoretic preservation theorem. This application suggests further fruitful uses of model theory in diagonal arguments in learning theory.
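
The bounds in the abstract are phrased in terms of quantifier complexity. As a hedged aside for orientation, the following LaTeX sketch records the standard prenex quantifier hierarchy that such statements presuppose; the notation is the usual one, and the paper's actual bounds are not reproduced here.

% Standard prenex quantifier hierarchy (illustrative only; the paper's
% specific upper and lower bounds are not restated here).
\[
\Sigma_0 = \Pi_0 = \text{the quantifier-free formulas}
\]
\[
\Sigma_{n+1} = \{\, \exists \bar{x}\, \varphi : \varphi \in \Pi_n \,\},
\qquad
\Pi_{n+1} = \{\, \forall \bar{x}\, \varphi : \varphi \in \Sigma_n \,\}
\]
% A universal law such as \forall x\,(R(x) \to B(x)) is \Pi_1, so
% "induction from laws" means the data may contain \Pi_1 sentences
% rather than only quantifier-free (\Sigma_0) ones.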

Cite

Text

Kelly. "Induction from the General to the More General." Annual Conference on Computational Learning Theory, 1989. doi:10.1016/B978-0-08-094829-4.50027-1

Markdown

[Kelly. "Induction from the General to the More General." Annual Conference on Computational Learning Theory, 1989.](https://mlanthology.org/colt/1989/kelly1989colt-induction/) doi:10.1016/B978-0-08-094829-4.50027-1

BibTeX

@inproceedings{kelly1989colt-induction,
  title     = {{Induction from the General to the More General}},
  author    = {Kelly, Kevin T.},
  booktitle = {Annual Conference on Computational Learning Theory},
  year      = {1989},
  pages     = {334--348},
  doi       = {10.1016/B978-0-08-094829-4.50027-1},
  url       = {https://mlanthology.org/colt/1989/kelly1989colt-induction/}
}