Inductive Biases for Relational Tasks

Abstract

Current deep learning approaches have shown good in-distribution performance but struggle in out-of-distribution settings. This is especially true in the case of tasks involving abstract relations like recognizing rules in sequences, as required in many intelligence tests. In contrast, our brains are remarkably flexible at such tasks, an attribute that is likely linked to anatomical constraints on computations. Inspired by this, recent work has explored how enforcing that relational representations remain distinct from sensory representations can help artificial systems. Building on this work, we further explore and formalize the advantages afforded by "partitioned" representations of relations and sensory details. We investigate inductive biases that ensure abstract relations are learned and represented distinctly from sensory data across several neural network architectures and show that they outperform existing architectures on out-of-distribution generalization for various relational tasks. These results show that partitioning relational representations from other information streams may be a simple way to augment existing network architectures' robustness when performing relational computations.

Cite

Text

Kerg et al. "Inductive Biases for Relational Tasks." ICLR 2022 Workshops: OSC, 2022.

Markdown

[Kerg et al. "Inductive Biases for Relational Tasks." ICLR 2022 Workshops: OSC, 2022.](https://mlanthology.org/iclrw/2022/kerg2022iclrw-inductive/)

BibTeX

@inproceedings{kerg2022iclrw-inductive,
  title     = {{Inductive Biases for Relational Tasks}},
  author    = {Kerg, Giancarlo and Mittal, Sarthak and Rolnick, David and Bengio, Yoshua and Richards, Blake Aaron and Lajoie, Guillaume},
  booktitle = {ICLR 2022 Workshops: OSC},
  year      = {2022},
  url       = {https://mlanthology.org/iclrw/2022/kerg2022iclrw-inductive/}
}