Neuromorphic Visual Scene Understanding with Resonator Networks (in Brief)

Abstract

Inferring the position of objects and their rigid transformations is still an open problem in visual scene understanding. Here we propose a neuromorphic framework that poses scene understanding as a factorization problem and uses a resonator network to extract object identities and their transformations. The framework uses vector binding operations to produce generative image models in which binding acts as the equivariant operation for geometric transformations. A scene can therefore be described as a sum of vector products, which in turn can be efficiently factorized by a resonator network to infer objects and their poses. We also describe a hierarchical resonator network that enables the definition of a partitioned architecture in which vector binding is equivariant for horizontal and vertical translation within one partition, and for rotation and scaling within the other partition. We demonstrate our approach using synthetic scenes composed of simple 2D shapes undergoing rigid geometric transformations and color changes.
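The factorization step described above relies on resonator-network dynamics. As a rough, self-contained illustration of that idea only (not the authors' neuromorphic, hierarchical, or complex-valued implementation), the NumPy sketch below factorizes a single vector bound from three bipolar codebooks, which could stand in for shape, horizontal position, and vertical position. The dimensionality, codebook sizes, and use of bipolar codes with elementwise binding are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 2000   # vector dimensionality (assumed for illustration)
M = 20     # number of codes per factor (assumed for illustration)

# Random bipolar codebooks for three factors.
X = rng.choice([-1, 1], size=(M, D))
Y = rng.choice([-1, 1], size=(M, D))
Z = rng.choice([-1, 1], size=(M, D))

# Compose a "scene" vector by binding (elementwise product) one code per factor.
ix, iy, iz = 3, 7, 12
s = X[ix] * Y[iy] * Z[iz]

# Initialize each factor estimate with the superposition of its codebook.
x_hat = np.where(X.sum(axis=0) >= 0, 1, -1)
y_hat = np.where(Y.sum(axis=0) >= 0, 1, -1)
z_hat = np.where(Z.sum(axis=0) >= 0, 1, -1)

# Resonator iterations: unbind with the current estimates of the other
# factors, then clean up against the corresponding codebook.
for _ in range(50):
    x_hat = np.sign(X.T @ (X @ (s * y_hat * z_hat)))
    y_hat = np.sign(Y.T @ (Y @ (s * x_hat * z_hat)))
    z_hat = np.sign(Z.T @ (Z @ (s * x_hat * y_hat)))

print("recovered indices:",
      int(np.argmax(X @ x_hat)),
      int(np.argmax(Y @ y_hat)),
      int(np.argmax(Z @ z_hat)))  # should print 3 7 12
```

Each update unbinds the scene vector with the current estimates of the other factors and projects the result onto the corresponding codebook, so all factor estimates converge jointly instead of being searched exhaustively over the product space.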

Cite

Text

Renner et al. "Neuromorphic Visual Scene Understanding with Resonator Networks (in Brief)." NeurIPS 2022 Workshops: NeurReps, 2022.

Markdown

[Renner et al. "Neuromorphic Visual Scene Understanding with Resonator Networks (in Brief)." NeurIPS 2022 Workshops: NeurReps, 2022.](https://mlanthology.org/neuripsw/2022/renner2022neuripsw-neuromorphic/)

BibTeX

@inproceedings{renner2022neuripsw-neuromorphic,
  title     = {{Neuromorphic Visual Scene Understanding with Resonator Networks (in Brief)}},
  author    = {Renner, Alpha and Indiveri, Giacomo and Supic, Lazar and Danielescu, Andreea and Olshausen, Bruno and Sommer, Friedrich and Sandamirskaya, Yulia and Frady, Edward Paxon},
  booktitle = {NeurIPS 2022 Workshops: NeurReps},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/renner2022neuripsw-neuromorphic/}
}