Human-like Few-Shot Learning via Bayesian Reasoning over Natural Language
Abstract
A core tension in models of concept learning is that the model must carefully balance the tractability of inference against the expressivity of the hypothesis class. Humans, however, can efficiently learn a broad range of concepts. We introduce a model of inductive learning that seeks to be human-like in that sense. It implements a Bayesian reasoning process where a language model first proposes candidate hypotheses expressed in natural language, which are then re-weighted by a prior and a likelihood. By estimating the prior from human data, we can predict human judgments on learning problems involving numbers and sets, spanning concepts that are generative, discriminative, propositional, and higher-order.
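To make the re-weighting step concrete, here is a minimal sketch of the propose-then-score loop the abstract describes, assuming a hypothetical number-concept task. The function names, the fixed hypothesis list standing in for language-model proposals, the description-length prior standing in for the human-estimated prior, and the size-principle likelihood are all illustrative placeholders, not the paper's implementation.

```python
import math

def propose_hypotheses(examples):
    # In the paper, a language model conditioned on the examples proposes
    # candidate concepts in natural language. A fixed list stands in here.
    return ["multiples of 10", "even numbers", "powers of 2"]

def log_prior(hypothesis):
    # Placeholder prior that mildly prefers shorter descriptions; the paper
    # instead estimates the prior from human data.
    return -0.1 * len(hypothesis.split())

def log_likelihood(examples, hypothesis):
    # Placeholder size-principle likelihood: each example is assumed to be
    # drawn uniformly from the concept's extension over 1..100.
    extensions = {
        "multiples of 10": {x for x in range(1, 101) if x % 10 == 0},
        "even numbers": {x for x in range(1, 101) if x % 2 == 0},
        "powers of 2": {1, 2, 4, 8, 16, 32, 64},
    }
    ext = extensions[hypothesis]
    if not all(x in ext for x in examples):
        return float("-inf")  # hypothesis inconsistent with the data
    return -len(examples) * math.log(len(ext))

def posterior(examples):
    # Bayesian re-weighting: posterior(h) proportional to prior(h) * likelihood(data | h),
    # normalized over the proposed candidates only.
    hyps = propose_hypotheses(examples)
    scores = [log_prior(h) + log_likelihood(examples, h) for h in hyps]
    best = max(scores)
    weights = [math.exp(s - best) for s in scores]
    total = sum(weights)
    return {h: w / total for h, w in zip(hyps, weights)}

print(posterior([10, 30, 60]))  # mass concentrates on "multiples of 10"
```

With the examples [10, 30, 60], "powers of 2" is ruled out entirely, and the size principle favors the tighter "multiples of 10" over "even numbers", which is the qualitative behavior the Bayesian re-weighting is meant to capture.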
Cite
Text
Ellis. "Human-like Few-Shot Learning via Bayesian Reasoning over Natural Language." Neural Information Processing Systems, 2023.Markdown
[Ellis. "Human-like Few-Shot Learning via Bayesian Reasoning over Natural Language." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/ellis2023neurips-humanlike/)BibTeX
@inproceedings{ellis2023neurips-humanlike,
  title = {{Human-like Few-Shot Learning via Bayesian Reasoning over Natural Language}},
  author = {Ellis, Kevin},
  booktitle = {Neural Information Processing Systems},
  year = {2023},
  url = {https://mlanthology.org/neurips/2023/ellis2023neurips-humanlike/}
}