Rules and Similarity in Concept Learning
Abstract
This paper argues that two apparently distinct modes of generalizing concepts - abstracting rules and computing similarity to exemplars - should both be seen as special cases of a more general Bayesian learning framework. Bayes explains the specific workings of these two modes - which rules are abstracted, how similarity is measured - as well as why generalization should appear rule- or similarity-based in different situations. This analysis also suggests why the rules/similarity distinction, even if not computationally fundamental, may still be useful at the algorithmic level as part of a principled approximation to fully Bayesian learning.
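As a rough illustration of the kind of Bayesian computation the abstract refers to, the sketch below implements generalization by hypothesis averaging with a size-principle likelihood over a toy hypothesis space. The particular hypotheses, prior, and example numbers are invented for illustration; they are not the paper's stimuli or model details.

```python
# Minimal sketch of Bayesian concept generalization (illustrative only, not
# the paper's exact model). Hypotheses are sets of integers; the likelihood
# follows the size principle: p(X | h) = (1 / |h|)^n when h contains every
# example, and 0 otherwise.

def posterior(hypotheses, prior, examples):
    """Return the normalized posterior p(h | X) over the hypothesis list."""
    scores = []
    for h, p in zip(hypotheses, prior):
        if all(x in h for x in examples):                      # h must cover all examples
            scores.append(p * (1.0 / len(h)) ** len(examples))  # size-principle likelihood
        else:
            scores.append(0.0)
    total = sum(scores)
    return [s / total for s in scores]

def p_in_concept(y, hypotheses, post):
    """Generalization by hypothesis averaging: p(y in concept | X)."""
    return sum(p for h, p in zip(hypotheses, post) if y in h)

# Toy hypothesis space (assumed for illustration): even numbers vs. a narrow interval.
hypotheses = [set(range(2, 101, 2)), set(range(10, 21))]
prior = [0.5, 0.5]
examples = [12, 14, 16]

post = posterior(hypotheses, prior, examples)
print(post)                                # narrow interval dominates ("rule"-like)
print(p_in_concept(18, hypotheses, post))  # high: consistent with both hypotheses
print(p_in_concept(40, hypotheses, post))  # low: only in the broad "even" hypothesis
```

Under the size principle, a few tightly clustered examples concentrate the posterior on the sharpest consistent hypothesis, so generalization looks rule-based; with only one or two examples the mass stays spread over many hypotheses and generalization looks graded, more like similarity to the exemplars.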
Cite
Text
Tenenbaum. "Rules and Similarity in Concept Learning." Neural Information Processing Systems, 1999.Markdown
[Tenenbaum. "Rules and Similarity in Concept Learning." Neural Information Processing Systems, 1999.](https://mlanthology.org/neurips/1999/tenenbaum1999neurips-rules/)BibTeX
@inproceedings{tenenbaum1999neurips-rules,
title = {{Rules and Similarity in Concept Learning}},
author = {Tenenbaum, Joshua B.},
booktitle = {Neural Information Processing Systems},
year = {1999},
pages = {59-65},
url = {https://mlanthology.org/neurips/1999/tenenbaum1999neurips-rules/}
}