A Benchmark for Systematic Generalization in Grounded Language Understanding
Abstract
Humans easily interpret expressions that describe unfamiliar situations composed from familiar parts ("greet the pink brontosaurus by the ferris wheel"). Modern neural networks, by contrast, struggle to interpret novel compositions. In this paper, we introduce a new benchmark, gSCAN, for evaluating compositional generalization in situated language understanding. Going beyond a related benchmark that focused on syntactic aspects of generalization, gSCAN defines a language grounded in the states of a grid world, facilitating novel evaluations of acquiring linguistically motivated rules. For example, agents must understand how adjectives such as 'small' are interpreted relative to the current world state, or how adverbs such as 'cautiously' combine with new verbs. We test a strong multi-modal baseline model and a state-of-the-art compositional method, finding that, in most cases, they fail dramatically when generalization requires systematic compositional rules.
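To make the task setup concrete, below is a minimal illustrative sketch of how a gSCAN-style example might pair a natural-language command with a grid-world state and a target action sequence. All field names, the cell encoding, and the action strings are hypothetical and do not reflect the actual gSCAN data format; the sketch only shows why adjectives like 'small' must be resolved against the current world state and why adverbs like 'cautiously' modify how actions are carried out.

# Hypothetical, simplified illustration of a gSCAN-style example.
# Field names, encoding, and actions are assumptions for illustration only.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class GridObject:
    shape: str                 # e.g. "circle", "square"
    color: str                 # e.g. "red", "green"
    size: int                  # relative size; "small" is interpreted w.r.t. other objects present
    position: Tuple[int, int]  # (row, column) on the grid


@dataclass
class Example:
    command: str               # natural-language instruction grounded in the grid
    grid: List[GridObject]     # world state the command refers to
    target_actions: List[str]  # low-level action sequence the agent must produce


# Toy example: "small" is only meaningful relative to the other red circle,
# and "cautiously" changes how the walking actions are to be executed.
example = Example(
    command="walk to the small red circle cautiously",
    grid=[
        GridObject("circle", "red", size=1, position=(0, 2)),
        GridObject("circle", "red", size=3, position=(3, 3)),
    ],
    target_actions=["turn right", "walk", "walk", "turn left", "walk"],
)
print(example.command, "->", example.target_actions)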
Cite
Text
Ruis et al. "A Benchmark for Systematic Generalization in Grounded Language Understanding." Neural Information Processing Systems, 2020.
Markdown
[Ruis et al. "A Benchmark for Systematic Generalization in Grounded Language Understanding." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/ruis2020neurips-benchmark/)
BibTeX
@inproceedings{ruis2020neurips-benchmark,
title = {{A Benchmark for Systematic Generalization in Grounded Language Understanding}},
author = {Ruis, Laura and Andreas, Jacob and Baroni, Marco and Bouchacourt, Diane and Lake, Brenden M.},
booktitle = {Neural Information Processing Systems},
year = {2020},
url = {https://mlanthology.org/neurips/2020/ruis2020neurips-benchmark/}
}