Bridging the Gap to Real-World Object-Centric Learning

Abstract

Humans naturally decompose their environment into entities at the appropriate level of abstraction to act in the world. Allowing machine learning algorithms to derive this decomposition in an unsupervised way has become an important line of research. However, current methods are restricted to simulated data or require additional information in the form of motion or depth in order to successfully discover objects. In this work, we overcome this limitation by showing that reconstructing features from models trained in a self-supervised manner is a sufficient training signal for object-centric representations to arise in a fully unsupervised way. Our approach, DINOSAUR, significantly outperforms existing object-centric learning models on simulated data and is the first unsupervised object-centric model that scales to real-world datasets such as COCO and PASCAL VOC. DINOSAUR is conceptually simple and shows competitive performance compared to more involved pipelines from the computer vision literature.
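
The abstract's core idea, reconstructing self-supervised features rather than pixels as the training signal for object discovery, can be illustrated with a minimal sketch: frozen ViT patch features (e.g. from DINO) are grouped by a Slot Attention-style module, and a per-slot decoder reconstructs those same features, with a mean-squared reconstruction loss driving the grouping. All module names, sizes, and the spatial-broadcast MLP decoder below are illustrative assumptions for exposition, not the authors' implementation.

# Sketch of a feature-reconstruction objective for object-centric learning.
# Assumptions: frozen self-supervised patch features stand in for DINO ViT
# outputs; module sizes and names are illustrative only.
import torch
import torch.nn as nn


class SlotAttention(nn.Module):
    """Simplified Slot Attention grouping module (Locatello et al., 2020)."""

    def __init__(self, num_slots=7, dim=256, iters=3):
        super().__init__()
        self.num_slots, self.iters = num_slots, iters
        self.scale = dim ** -0.5
        self.slots_mu = nn.Parameter(torch.randn(1, 1, dim))
        self.slots_log_sigma = nn.Parameter(torch.zeros(1, 1, dim))
        self.to_q, self.to_k, self.to_v = (nn.Linear(dim, dim) for _ in range(3))
        self.gru = nn.GRUCell(dim, dim)
        self.norm_in, self.norm_slots = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, feats):                       # feats: (B, N, D)
        B, N, D = feats.shape
        feats = self.norm_in(feats)
        k, v = self.to_k(feats), self.to_v(feats)
        slots = self.slots_mu + self.slots_log_sigma.exp() * torch.randn(
            B, self.num_slots, D, device=feats.device)
        for _ in range(self.iters):
            q = self.to_q(self.norm_slots(slots))
            attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=1)
            attn = attn / attn.sum(dim=-1, keepdim=True)   # normalize over inputs
            updates = attn @ v                              # (B, K, D)
            slots = self.gru(updates.reshape(-1, D),
                             slots.reshape(-1, D)).view(B, -1, D)
        return slots


num_slots, dim, num_patches = 7, 256, 196
grouping = SlotAttention(num_slots=num_slots, dim=dim)
# Per-slot MLP decoder: each slot, broadcast to every patch position, predicts
# a feature vector plus an alpha logit; outputs are mixed via softmax over slots.
decoder = nn.Sequential(nn.Linear(dim, 1024), nn.ReLU(), nn.Linear(1024, dim + 1))
pos_emb = nn.Parameter(torch.randn(1, 1, num_patches, dim))

patch_feats = torch.randn(2, num_patches, dim)     # stand-in for frozen DINO features
slots = grouping(patch_feats)                      # (B, K, D)
slots_b = slots.unsqueeze(2).expand(-1, -1, num_patches, -1) + pos_emb
out = decoder(slots_b)                             # (B, K, N, D + 1)
recon, alpha = out[..., :-1], out[..., -1:]
masks = torch.softmax(alpha, dim=1)                # mixture weights over slots
reconstruction = (masks * recon).sum(dim=1)        # (B, N, D)
loss = nn.functional.mse_loss(reconstruction, patch_feats)

In such a setup the alpha masks double as unsupervised object segmentations once training converges; the key design choice highlighted by the abstract is that the reconstruction target is the self-supervised feature space rather than raw pixels.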

Cite

Text

Seitzer et al. "Bridging the Gap to Real-World Object-Centric Learning." International Conference on Learning Representations, 2023.

Markdown

[Seitzer et al. "Bridging the Gap to Real-World Object-Centric Learning." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/seitzer2023iclr-bridging/)

BibTeX

@inproceedings{seitzer2023iclr-bridging,
  title     = {{Bridging the Gap to Real-World Object-Centric Learning}},
  author    = {Seitzer, Maximilian and Horn, Max and Zadaianchuk, Andrii and Zietlow, Dominik and Xiao, Tianjun and Simon-Gabriel, Carl-Johann and He, Tong and Zhang, Zheng and Schölkopf, Bernhard and Brox, Thomas and Locatello, Francesco},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/seitzer2023iclr-bridging/}
}