Bayesian Models of Inductive Generalization
Abstract
We argue that human inductive generalization is best explained in a Bayesian framework, rather than by traditional models based on similarity computations. We go beyond previous work on Bayesian concept learning by introducing an unsupervised method for constructing flexible hypothesis spaces, and we propose a version of the Bayesian Occam's razor that trades off priors and likelihoods to prevent under- or over-generalization in these flexible spaces. We analyze two published data sets on inductive reasoning as well as the results of a new behavioral study that we have carried out.
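The trade-off the abstract describes can be illustrated with a minimal sketch (not the paper's code or data): a Bayesian learner scores each hypothesis by a prior times a "size principle" likelihood, so smaller hypotheses consistent with the examples are favored over broader ones. The toy hypothesis sets and prior values below are assumptions for illustration only.

```python
# Hedged sketch of Bayesian generalization with the size principle:
# P(h | X) is proportional to P(h) * (1/|h|)^n for n observed examples,
# and is zero if any example falls outside h.

def posterior_predictive(examples, query, hypotheses, prior):
    weights = []
    for h, p in zip(hypotheses, prior):
        if all(x in h for x in examples):
            # Size principle: each example contributes a factor 1/|h|.
            weights.append(p * (1.0 / len(h)) ** len(examples))
        else:
            weights.append(0.0)
    z = sum(weights)
    # Generalization probability: total posterior mass on hypotheses
    # that contain the query item.
    return sum(w for h, w in zip(hypotheses, weights) if query in h) / z

# Toy numeric concepts over 1..20 (illustrative only).
hypotheses = [
    set(range(2, 21, 2)),  # even numbers
    {2, 4, 8, 16},         # powers of two
    set(range(1, 21)),     # all numbers
]
prior = [0.4, 0.2, 0.4]

# Given examples {2, 4, 16}, the small "powers of two" hypothesis
# dominates, so generalization to 8 is high and to 6 is low.
p8 = posterior_predictive([2, 4, 16], 8, hypotheses, prior)
p6 = posterior_predictive([2, 4, 16], 6, hypotheses, prior)
```

With three examples consistent with the smallest hypothesis, its likelihood advantage outweighs the broader hypotheses' larger priors, which is the over-generalization guard the abstract's Occam's razor refers to.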
Cite
Text
Sanjana and Tenenbaum. "Bayesian Models of Inductive Generalization." Neural Information Processing Systems, 2002.
Markdown
[Sanjana and Tenenbaum. "Bayesian Models of Inductive Generalization." Neural Information Processing Systems, 2002.](https://mlanthology.org/neurips/2002/sanjana2002neurips-bayesian/)
BibTeX
@inproceedings{sanjana2002neurips-bayesian,
title = {{Bayesian Models of Inductive Generalization}},
author = {Sanjana, Neville E. and Tenenbaum, Joshua B.},
booktitle = {Neural Information Processing Systems},
year = {2002},
pages = {59--66},
url = {https://mlanthology.org/neurips/2002/sanjana2002neurips-bayesian/}
}