Compositionality Decomposed: How Do Neural Networks Generalise? (Extended Abstract)

Abstract

Despite a multitude of empirical studies, little consensus exists on whether neural networks are able to generalise compositionally. As a response to this controversy, we present a set of tests that provide a bridge between, on the one hand, the vast amount of linguistic and philosophical theory about compositionality of language and, on the other, the successful neural models of language. We collect different interpretations of compositionality and translate them into five theoretically grounded tests for models that are formulated on a task-independent level. To demonstrate the usefulness of this evaluation paradigm, we instantiate these five tests on a highly compositional data set which we dub PCFG SET, apply the resulting tests to three popular sequence-to-sequence models and provide an in-depth analysis of the results.

Cite

Text

Hupkes et al. "Compositionality Decomposed: How Do Neural Networks Generalise? (Extended Abstract)." International Joint Conference on Artificial Intelligence, 2020. doi:10.24963/ijcai.2020/708

Markdown

[Hupkes et al. "Compositionality Decomposed: How Do Neural Networks Generalise? (Extended Abstract)." International Joint Conference on Artificial Intelligence, 2020.](https://mlanthology.org/ijcai/2020/hupkes2020ijcai-compositionality/) doi:10.24963/ijcai.2020/708

BibTeX

@inproceedings{hupkes2020ijcai-compositionality,
  title     = {{Compositionality Decomposed: How Do Neural Networks Generalise? (Extended Abstract)}},
  author    = {Hupkes, Dieuwke and Dankers, Verna and Mul, Mathijs and Bruni, Elia},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {5065--5069},
  doi       = {10.24963/ijcai.2020/708},
  url       = {https://mlanthology.org/ijcai/2020/hupkes2020ijcai-compositionality/}
}