Compositional Generalization from First Principles

Abstract

Leveraging the compositional nature of our world to expedite learning and facilitate generalization is a hallmark of human perception. In machine learning, on the other hand, achieving compositional generalization has proven an elusive goal, even for models with explicit compositional priors. To get a better handle on compositional generalization, we approach it from the bottom up: inspired by identifiable representation learning, we investigate compositionality as a property of the data-generating process rather than of the data itself. This reformulation enables us to derive mild conditions on only the support of the training distribution and on the model architecture, which are sufficient for compositional generalization. We further demonstrate how our theoretical framework applies to real-world scenarios and validate our findings empirically. Our results set the stage for a principled theoretical study of compositional generalization.
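
The abstract's central idea can be illustrated with a toy experiment. The sketch below is a minimal, hedged illustration, not the paper's construction: the additive composition, the component functions phi1/phi2, and the polynomial model are all illustrative assumptions. It shows a data-generating process that composes per-component functions, a training distribution whose support covers each component's full range but only an L-shaped sliver of their joint combinations, and a model sharing the compositional (additive) structure that nonetheless predicts well on unseen combinations.

import numpy as np

rng = np.random.default_rng(0)

# Toy compositional data-generating process (assumed forms):
# each latent z_k is rendered by its own component function,
# and the observation is their additive composition.
def phi1(z):
    return np.sin(3 * z)

def phi2(z):
    return z ** 3

def generate(z1, z2):
    return phi1(z1) + phi2(z2)

# Training support: an L-shaped region that covers each component's
# full marginal range but only a sliver of joint (z1, z2) combinations.
z = rng.uniform(-1, 1, size=(20000, 2))
z = z[(np.abs(z[:, 0]) < 0.2) | (np.abs(z[:, 1]) < 0.2)]
y = generate(z[:, 0], z[:, 1])

# Model with matching compositional (additive) structure: one
# polynomial feature map per component, combined linearly.
def feats(zk, deg=9):
    return np.stack([zk ** d for d in range(1, deg + 1)], axis=1)

X = np.hstack([feats(z[:, 0]), feats(z[:, 1])])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Evaluate on combinations never seen in training (both |z_k| > 0.2).
zt = rng.uniform(-1, 1, size=(5000, 2))
zt = zt[(np.abs(zt[:, 0]) > 0.2) & (np.abs(zt[:, 1]) > 0.2)]
pred = np.hstack([feats(zt[:, 0]), feats(zt[:, 1])]) @ w
mse = float(np.mean((pred - generate(zt[:, 0], zt[:, 1])) ** 2))
print(f"MSE on unseen component combinations: {mse:.2e}")  # small despite no joint coverage

In this toy setting, an unconstrained model could fit the L-shaped training support while behaving arbitrarily on unseen combinations; it is the shared compositional structure of the model, together with marginal coverage in the training support, that yields generalization, echoing the two sufficient conditions the abstract refers to.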

Cite

Text

Wiedemer et al. "Compositional Generalization from First Principles." Neural Information Processing Systems, 2023.

Markdown

[Wiedemer et al. "Compositional Generalization from First Principles." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/wiedemer2023neurips-compositional/)

BibTeX

@inproceedings{wiedemer2023neurips-compositional,
  title     = {{Compositional Generalization from First Principles}},
  author    = {Wiedemer, Thaddäus and Mayilvahanan, Prasanna and Bethge, Matthias and Brendel, Wieland},
  booktitle = {Neural Information Processing Systems},
  year      = {2023},
  url       = {https://mlanthology.org/neurips/2023/wiedemer2023neurips-compositional/}
}