Attend, Infer, Repeat: Fast Scene Understanding with Generative Models

Abstract

We present a framework for efficient inference in structured image models that explicitly reason about objects. We achieve this by performing probabilistic inference using a recurrent neural network that attends to scene elements and processes them one at a time. Crucially, the model itself learns to choose the appropriate number of inference steps. We use this scheme to learn to perform inference in partially specified 2D models (variable-sized variational auto-encoders) and fully specified 3D models (probabilistic renderers). We show that such models learn to identify multiple objects - counting, locating and classifying the elements of a scene - without any supervision, e.g., decomposing 3D images with various numbers of objects in a single forward pass of a neural network at unprecedented speed. We further show that the networks produce accurate inferences when compared to supervised counterparts, and that their structure leads to improved generalization.
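The core idea above — a recurrent network that attends to one scene element per step and itself decides when to stop — can be sketched as follows. This is a minimal illustrative loop, not the authors' implementation: the weight matrices are random stand-ins for a trained network, and the latent names (`z_where`, `z_what`, presence variable) follow the paper's terminology only loosely.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def air_inference_sketch(image, hidden_dim=16, max_steps=5):
    """Illustrative AIR-style inference loop (hypothetical, untrained).

    A recurrent state is updated once per attended object. At each step the
    network emits a presence probability; inference halts as soon as the
    sampled presence variable is 0, so the model itself chooses the number
    of inference steps (and hence the inferred object count).
    """
    d = image.size
    # Hypothetical random parameters standing in for learned weights.
    W_h = rng.normal(0, 0.1, (hidden_dim, hidden_dim + d))
    W_where = rng.normal(0, 0.1, (3, hidden_dim))  # e.g. scale, x, y of attention window
    W_what = rng.normal(0, 0.1, (4, hidden_dim))   # latent code for object appearance
    W_pres = rng.normal(0, 0.1, (1, hidden_dim))

    h = np.zeros(hidden_dim)
    objects = []
    for _ in range(max_steps):
        h = np.tanh(W_h @ np.concatenate([h, image.ravel()]))
        p_pres = sigmoid(W_pres @ h)[0]
        if rng.random() > p_pres:   # sampled z_pres = 0: no more objects
            break
        z_where = W_where @ h       # where to attend in the scene
        z_what = W_what @ h         # what the attended object looks like
        objects.append((z_where, z_what))
    return objects  # inferred object count = len(objects)
```

In the actual model these steps parameterize a variational posterior and are trained end-to-end against a generative model (a variable-sized VAE or a probabilistic renderer); the sketch only shows the attend-infer-repeat control flow.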

Cite

Text

Eslami et al. "Attend, Infer, Repeat: Fast Scene Understanding with Generative Models." Neural Information Processing Systems, 2016.

Markdown

[Eslami et al. "Attend, Infer, Repeat: Fast Scene Understanding with Generative Models." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/eslami2016neurips-attend/)

BibTeX

@inproceedings{eslami2016neurips-attend,
  title     = {{Attend, Infer, Repeat: Fast Scene Understanding with Generative Models}},
  author    = {Eslami, S. M. Ali and Heess, Nicolas and Weber, Theophane and Tassa, Yuval and Szepesvari, David and Kavukcuoglu, Koray and Hinton, Geoffrey E.},
  booktitle = {Neural Information Processing Systems},
  year      = {2016},
  pages     = {3225--3233},
  url       = {https://mlanthology.org/neurips/2016/eslami2016neurips-attend/}
}