Learning and Using Relational Theories

Abstract

Much of human knowledge is organized into sophisticated systems that are often called intuitive theories. We propose that intuitive theories are mentally represented in a logical language, and that the subjective complexity of a theory is determined by the length of its representation in this language. This complexity measure helps to explain how theories are learned from relational data, and how they support inductive inferences about unobserved relations. We describe two experiments that test our approach, and show that it provides a better account of human learning and reasoning than an approach developed by Goodman [1].
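The abstract's central idea is a description-length prior over logical theories: shorter theories in the representation language are a priori more probable, and a theory, once learned, licenses predictions about unobserved relations. The sketch below is an illustrative toy version of that idea, not the paper's actual representation language or model; the rule encoding, the functions `theory_length`, `log_prior`, and `closure`, and the example relation `R` are all hypothetical choices made for this example.

```python
from itertools import product

# Toy relational theory: a literal is (predicate, args), a rule is
# (body_literals, head_literal), and a theory is a list of rules.

def literal_length(literal):
    """One symbol for the predicate plus one per argument variable."""
    predicate, args = literal
    return 1 + len(args)

def theory_length(theory):
    """Subjective complexity = total number of symbols in the theory."""
    return sum(
        sum(literal_length(lit) for lit in body) + literal_length(head)
        for body, head in theory
    )

def log_prior(theory, symbol_cost=1.0):
    """Description-length prior: log P(T) is proportional to -length(T)."""
    return -symbol_cost * theory_length(theory)

def closure(facts, theory, objects):
    """Forward-chain the rules over a finite object set to predict
    unobserved relations implied by the theory."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in theory:
            variables = sorted({v for lit in body + [head] for v in lit[1]})
            for binding in product(objects, repeat=len(variables)):
                env = dict(zip(variables, binding))
                if all((p, tuple(env[v] for v in args)) in derived
                       for p, args in body):
                    new_fact = (head[0], tuple(env[v] for v in head[1]))
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

# Example: transitivity, R(x,y) & R(y,z) -> R(x,z).
transitivity = ([("R", ("x", "y")), ("R", ("y", "z"))], ("R", ("x", "z")))
theory = [transitivity]

print(theory_length(theory))  # 9 symbols
print(log_prior(theory))      # -9.0

observed = {("R", ("a", "b")), ("R", ("b", "c"))}
# The inferred set now also contains the unobserved relation R(a, c).
print(closure(observed, theory, ["a", "b", "c"]))
```

Under this kind of prior, a compact rule such as transitivity is preferred over a theory that simply lists every observed pair, which is the intuition behind scoring candidate theories by prior times likelihood of the observed relational data.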

Cite

Text

Kemp et al. "Learning and Using Relational Theories." Neural Information Processing Systems, 2007.

Markdown

[Kemp et al. "Learning and Using Relational Theories." Neural Information Processing Systems, 2007.](https://mlanthology.org/neurips/2007/kemp2007neurips-learning/)

BibTeX

@inproceedings{kemp2007neurips-learning,
  title     = {{Learning and Using Relational Theories}},
  author    = {Kemp, Charles and Goodman, Noah and Tenenbaum, Joshua B.},
  booktitle = {Neural Information Processing Systems},
  year      = {2007},
  pages     = {753--760},
  url       = {https://mlanthology.org/neurips/2007/kemp2007neurips-learning/}
}