Illiterate DALL-E Learns to Compose

Abstract

Although DALL-E has shown an impressive ability of composition-based systematic generalization in image generation, it requires a dataset of text-image pairs, and its compositionality is provided by the text. In contrast, object-centric representation models such as the Slot Attention model learn composable representations without text prompts. However, unlike DALL-E, their ability to systematically generalize for zero-shot generation is significantly limited. In this paper, we propose a simple but novel slot-based autoencoding architecture, called SLATE, that combines the best of both worlds: it learns object-centric representations that allow systematic generalization in zero-shot image generation without text. As such, this model can also be seen as an illiterate DALL-E model. Unlike the pixel-mixture decoders of existing object-centric representation models, we propose to use an Image GPT decoder conditioned on the slots to capture complex interactions among the slots and pixels. In experiments, we show that this simple, easy-to-implement architecture, which requires no text prompts, achieves significant improvements in both in-distribution and out-of-distribution (zero-shot) image generation, and produces slot-attention structure that is qualitatively comparable to or better than that of models based on mixture decoders.
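To make the decoder design concrete, below is a minimal PyTorch sketch of slot-conditioned autoregressive decoding in the style the abstract describes: an Image GPT-like transformer predicts discrete image tokens while cross-attending to the slots, rather than mixing per-slot pixel reconstructions. It assumes the image has already been discretized into a token sequence (as Image GPT requires, e.g., by a discrete VAE) and that slots of matching dimensionality come from a Slot Attention encoder; the class name, dimensions, and hyperparameters here are illustrative, not the paper's exact configuration.

import torch
import torch.nn as nn

class SlotConditionedDecoder(nn.Module):
    """Autoregressive transformer that predicts discrete image tokens
    conditioned on a set of slots (illustrative sketch)."""

    def __init__(self, vocab_size=512, d_model=192, n_heads=4,
                 n_layers=4, seq_len=256):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Parameter(torch.zeros(1, seq_len, d_model))
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, slots):
        # tokens: (B, T) discrete image codes; slots: (B, K, d_model)
        T = tokens.size(1)
        x = self.tok_emb(tokens) + self.pos_emb[:, :T]
        causal = nn.Transformer.generate_square_subsequent_mask(T).to(tokens.device)
        # Cross-attention over the slot set lets every token position attend
        # to every slot, capturing slot-slot and slot-pixel interactions that
        # a pixel-mixture decoder cannot express.
        h = self.decoder(tgt=x, memory=slots, tgt_mask=causal)
        return self.head(h)  # (B, T, vocab_size) next-token logits

# Training sketch: teacher-forced next-token prediction over image codes.
B, T, K, D = 2, 256, 4, 192
tokens = torch.randint(0, 512, (B, T))  # e.g., codes from a discrete VAE
slots = torch.randn(B, K, D)            # e.g., from a Slot Attention encoder
logits = SlotConditionedDecoder()(tokens[:, :-1], slots)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 512), tokens[:, 1:].reshape(-1))

Under this reading, training maximizes the next-token likelihood of the image codes given the slots, in contrast to the per-pixel mixture likelihoods used by existing object-centric decoders.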

Cite

Text

Singh et al. "Illiterate DALL-E Learns to Compose." International Conference on Learning Representations, 2022.

Markdown

[Singh et al. "Illiterate DALL-E Learns to Compose." International Conference on Learning Representations, 2022.](https://mlanthology.org/iclr/2022/singh2022iclr-illiterate/)

BibTeX

@inproceedings{singh2022iclr-illiterate,
  title     = {{Illiterate DALL-E Learns to Compose}},
  author    = {Singh, Gautam and Deng, Fei and Ahn, Sungjin},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://mlanthology.org/iclr/2022/singh2022iclr-illiterate/}
}