Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines
Abstract
Strong inductive biases give humans the ability to quickly learn to perform a variety of tasks. Although meta-learning is a method to endow neural networks with useful inductive biases, agents trained by meta-learning may sometimes acquire strategies very different from those of humans. We show that co-training these agents on predicting representations from natural language task descriptions and from programs induced to generate such tasks guides them toward more human-like inductive biases. Human-generated language descriptions and program induction models that add new learned primitives both contain abstract concepts that can compress description length. Co-training on these representations results in more human-like behavior in downstream meta-reinforcement learning agents than less abstract controls (synthetic language descriptions, program induction without learned primitives), suggesting that the abstraction supported by these representations is key.
Cite
Text
Kumar et al. "Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines." Neural Information Processing Systems, 2022.
Markdown
[Kumar et al. "Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/kumar2022neurips-using/)
BibTeX
@inproceedings{kumar2022neurips-using,
title = {{Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines}},
author = {Kumar, Sreejan and Correa, Carlos G. and Dasgupta, Ishita and Marjieh, Raja and Hu, Michael Y and Hawkins, Robert and Cohen, Jonathan D and Daw, Nathaniel and Narasimhan, Karthik and Griffiths, Tom},
booktitle = {Neural Information Processing Systems},
year = {2022},
url = {https://mlanthology.org/neurips/2022/kumar2022neurips-using/}
}