Grounded Object-Centric Learning

Abstract

The extraction of object-centric representations for downstream tasks is an emerging area of research. Learning grounded representations of objects that are guaranteed to be stable and invariant promises robust performance across different tasks and environments. Slot Attention (SA) learns object-centric representations by assigning objects to *slots*, but presupposes a *single* distribution from which all slots are randomly initialised. This results in an inability to learn *specialized* slots which bind to specific object types and remain invariant to identity-preserving changes in object appearance. To address this, we present *Conditional Slot Attention* (CoSA) using a novel concept of *Grounded Slot Dictionary* (GSD) inspired by vector quantization. Our proposed GSD comprises (i) canonical object-level property vectors and (ii) parametric Gaussian distributions, which define a prior over the slots. We demonstrate the benefits of our method in multiple downstream tasks such as scene generation, composition, and task adaptation, whilst remaining competitive with SA in object discovery.
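To make the contrast concrete, the following is a minimal, hypothetical sketch of the idea behind a Grounded Slot Dictionary: instead of sampling every slot from one shared Gaussian (as in vanilla Slot Attention), each dictionary entry carries a canonical property vector and its own Gaussian prior from which specialised slots are drawn. All names, shapes, and the entry-selection step here are illustrative assumptions, not the paper's actual implementation (which uses a vector-quantization-style mechanism to ground slots in dictionary entries).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Grounded Slot Dictionary (GSD): K entries, each holding
# (i) a canonical object-level property vector and (ii) the parameters
# of a per-entry Gaussian prior over slots. Illustrative sizes only.
K, D = 8, 64  # dictionary size and slot dimension (assumed values)

gsd = {
    "properties": rng.normal(size=(K, D)),  # canonical property vectors
    "mu":         rng.normal(size=(K, D)),  # per-entry prior means
    "log_sigma":  np.zeros((K, D)),         # per-entry prior log-std devs
}

def init_slots_from_gsd(entry_ids, gsd, rng):
    """Sample one slot per selected dictionary entry from that entry's
    Gaussian prior, so each slot can specialise to an object type."""
    mu = gsd["mu"][entry_ids]
    sigma = np.exp(gsd["log_sigma"][entry_ids])
    return mu + sigma * rng.normal(size=mu.shape)

def init_slots_vanilla(n_slots, mu, log_sigma, rng):
    """Vanilla Slot Attention baseline: ALL slots drawn from one
    shared Gaussian, so no slot is tied to a particular object type."""
    sigma = np.exp(log_sigma)
    return mu + sigma * rng.normal(size=(n_slots, mu.shape[-1]))

# A scene hypothesised to contain objects matching entries 2, 5, and 7.
grounded = init_slots_from_gsd(np.array([2, 5, 7]), gsd, rng)
shared = init_slots_vanilla(3, np.zeros(D), np.zeros(D), rng)
print(grounded.shape, shared.shape)
```

Because each slot is tied to a specific dictionary entry, repeated encounters with the same object type can reuse the same prior, which is the invariance property the abstract highlights.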

Cite

Text

Kori et al. "Grounded Object-Centric Learning." International Conference on Learning Representations, 2024.

Markdown

[Kori et al. "Grounded Object-Centric Learning." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/kori2024iclr-grounded/)

BibTeX

@inproceedings{kori2024iclr-grounded,
  title     = {{Grounded Object-Centric Learning}},
  author    = {Kori, Avinash and Locatello, Francesco and Ribeiro, Fabio De Sousa and Toni, Francesca and Glocker, Ben},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/kori2024iclr-grounded/}
}