Systematic Generalization with Edge Transformers

Abstract

Recent research suggests that systematic generalization in natural language understanding remains a challenge for state-of-the-art neural models such as Transformers and Graph Neural Networks. To tackle this challenge, we propose the Edge Transformer, a new model that combines inspiration from Transformers and rule-based symbolic AI. The first key idea in Edge Transformers is to associate vector states with every edge, that is, with every pair of input nodes, as opposed to just every node, as is done in the Transformer model. The second major innovation is a triangular attention mechanism that updates edge representations in a way that is inspired by unification from logic programming. We evaluate the Edge Transformer on compositional generalization benchmarks in relational reasoning, semantic parsing, and dependency parsing. In all three settings, the Edge Transformer outperforms Relation-aware, Universal, and classical Transformer baselines.
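To make the abstract's description concrete, the sketch below shows one plausible single-head reading of a triangular attention update over edge states, written in PyTorch. It is not the paper's exact parameterization: the module name, projection layers, and the choice to build values from an elementwise product of the two triangle edges are illustrative assumptions. The core idea it demonstrates is the one stated above: the model keeps a vector state for every ordered pair of nodes, and edge (i, j) is updated by attending over intermediate nodes l, combining the edges (i, l) and (l, j) that close the triangle.

```python
import torch
import torch.nn as nn


class TriangularAttentionSketch(nn.Module):
    """Illustrative single-head attention over edge states x[i, j].

    Edge (i, j) attends over intermediate nodes l, combining the two
    edges (i, l) and (l, j) that form a triangle with it. Projection
    names (q, k, v1, v2) are assumptions, not the paper's notation.
    """

    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v1 = nn.Linear(d_model, d_model)
        self.v2 = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n, n, d) -- one vector state per ordered pair of nodes.
        q = self.q(x)  # queries from the edge being updated, (i, j)
        k = self.k(x)  # keys from the first triangle edge, (i, l)
        # scores[i, l, j]: relevance of intermediate node l to edge (i, j)
        scores = torch.einsum('ijd,ild->ilj', q, k) * self.scale
        attn = scores.softmax(dim=1)  # normalize over intermediate nodes l
        # value for triangle (i, l, j): combine edges (i, l) and (l, j)
        v = torch.einsum('ild,ljd->iljd', self.v1(x), self.v2(x))
        # weighted sum over l gives the new state for edge (i, j)
        return torch.einsum('ilj,iljd->ijd', attn, v)


# Usage: n nodes yield an (n, n, d) grid of edge states.
layer = TriangularAttentionSketch(d_model=64)
edges = torch.randn(10, 10, 64)
out = layer(edges)  # (10, 10, 64)
```

Note the cost this design implies: attending over all (i, l, j) triples is O(n^3) in time and memory, versus the O(n^2) of standard node-level self-attention, which is the usual trade-off for maintaining explicit pairwise states.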

Cite

Text

Bergen et al. "Systematic Generalization with Edge Transformers." Neural Information Processing Systems, 2021.

Markdown

[Bergen et al. "Systematic Generalization with Edge Transformers." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/bergen2021neurips-systematic/)

BibTeX

@inproceedings{bergen2021neurips-systematic,
  title     = {{Systematic Generalization with Edge Transformers}},
  author    = {Bergen, Leon and O'Donnell, Timothy and Bahdanau, Dzmitry},
  booktitle = {Neural Information Processing Systems},
  year      = {2021},
  url       = {https://mlanthology.org/neurips/2021/bergen2021neurips-systematic/}
}