Distinguishing Rule and Exemplar-Based Generalization in Learning Systems
Abstract
Machine learning systems often do not share the same inductive biases as humans and, as a result, extrapolate or generalize in ways that are inconsistent with our expectations. The trade-off between exemplar- and rule-based generalization has been studied extensively in cognitive psychology; in this work, we present a protocol inspired by these experimental approaches to probe the inductive biases that control this trade-off in category-learning systems such as artificial neural networks. We isolate two such inductive biases: feature-level bias (differences in which features are more readily learned) and exemplar-vs-rule bias (differences in how these learned features are used for generalization of category labels). We find that standard neural network models are feature-biased and have a propensity towards exemplar-based extrapolation; we discuss the implications of these findings for machine-learning research on data augmentation, fairness, and systematic generalization.
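To make the probe concrete, the sketch below builds a toy category-learning task in the spirit of the protocol described above: a single "rule" feature and overall exemplar similarity agree on every training item, and a held-out conflict item on which they disagree reveals whether a trained network extrapolates in a rule-like or exemplar-like way. The task construction, feature dimensions, and use of scikit-learn's MLPClassifier are illustrative assumptions for exposition here, not the paper's actual stimuli or architecture.

# Minimal, illustrative sketch of a rule-vs-exemplar conflict probe
# (assumed toy setup; requires numpy and scikit-learn).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_per_class, n_context = 50, 5

def make_items(rule_value, context_value, n):
    # Feature 0 is the "rule" feature; the remaining context features
    # make exemplars of the same category look alike.
    rule = np.full((n, 1), rule_value)
    flip = rng.random((n, n_context)) < 0.1          # 10% feature noise
    context = np.where(flip, 1 - context_value, context_value)
    return np.hstack([rule, context])

# In training, the rule and exemplar similarity agree: category-1 items have
# rule = 1 and mostly-1 context; category-0 items have rule = 0 and mostly-0 context.
X = np.vstack([make_items(1, 1, n_per_class), make_items(0, 0, n_per_class)])
y = np.concatenate([np.ones(n_per_class), np.zeros(n_per_class)])

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X, y)

# Conflict probe: the rule feature says category 1, but the context features
# resemble category-0 training exemplars.
probe = np.array([[1] + [0] * n_context])
p_rule = model.predict_proba(probe)[0, 1]
print(f"P(category 1 | conflict probe) = {p_rule:.2f} "
      "(near 1: rule-like extrapolation; near 0: exemplar-like)")

Under this construction, a prediction driven by the single rule feature and one driven by similarity to stored exemplars make opposite calls on the probe, which is what lets the test separate the two generalization strategies.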
Cite
Text
Dasgupta et al. "Distinguishing Rule and Exemplar-Based Generalization in Learning Systems." International Conference on Machine Learning, 2022.
Markdown
[Dasgupta et al. "Distinguishing Rule and Exemplar-Based Generalization in Learning Systems." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/dasgupta2022icml-distinguishing/)
BibTeX
@inproceedings{dasgupta2022icml-distinguishing,
  title     = {{Distinguishing Rule and Exemplar-Based Generalization in Learning Systems}},
  author    = {Dasgupta, Ishita and Grant, Erin and Griffiths, Tom},
  booktitle = {International Conference on Machine Learning},
  year      = {2022},
  pages     = {4816--4830},
  volume    = {162},
  url       = {https://mlanthology.org/icml/2022/dasgupta2022icml-distinguishing/}
}