Visual Scene Representation with Hierarchical Equivariant Sparse Coding
Abstract
We propose a hierarchical neural network architecture for unsupervised learning of equivariant part-whole decompositions of visual scenes. In contrast to the global equivariance of group-equivariant networks, the proposed architecture exhibits equivariance to part-whole transformations throughout the hierarchy, which we term hierarchical equivariance. The model achieves these equivariant internal representations via hierarchical Bayesian inference, which gives rise to rich bottom-up, top-down, and lateral information flows, hypothesized to underlie the mechanisms of perceptual inference in visual cortex. We demonstrate these properties on a simple dataset of scenes containing multiple objects under independent rotations and translations.
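The sparse coding inference that underlies models like this one can be illustrated with a minimal, generic sketch. This is not the paper's architecture (which is hierarchical and equivariant); it only shows the basic building block: inferring sparse coefficients for a signal under a fixed dictionary via ISTA (iterative soft-thresholding). All names and parameters here are illustrative assumptions.

```python
import numpy as np

def ista_sparse_code(x, D, lam=0.1, n_steps=100):
    """Infer sparse coefficients a minimizing 0.5*||x - D a||^2 + lam*||a||_1
    via ISTA. A generic sparse-coding sketch, not the paper's model."""
    # Step size from the Lipschitz constant of the gradient (largest
    # eigenvalue of D^T D, i.e. squared spectral norm of D).
    L = np.linalg.norm(D, ord=2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(n_steps):
        grad = D.T @ (D @ a - x)                 # gradient of reconstruction term
        a = a - grad / L                         # gradient descent step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft-threshold
    return a

# Toy usage: overcomplete random dictionary, signal built from two atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)                   # unit-norm dictionary atoms
a_true = np.zeros(32)
a_true[[3, 17]] = [1.5, -2.0]
x = D @ a_true
a_hat = ista_sparse_code(x, D, lam=0.05, n_steps=500)
```

In the paper's setting, such coefficient inference is stacked hierarchically and made equivariant to part-whole transformations; this sketch covers only the single-layer sparse inference step.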
Cite
Text
Shewmake et al. "Visual Scene Representation with Hierarchical Equivariant Sparse Coding." NeurIPS 2023 Workshops: NeurReps, 2023.
Markdown
[Shewmake et al. "Visual Scene Representation with Hierarchical Equivariant Sparse Coding." NeurIPS 2023 Workshops: NeurReps, 2023.](https://mlanthology.org/neuripsw/2023/shewmake2023neuripsw-visual/)
BibTeX
@inproceedings{shewmake2023neuripsw-visual,
title = {{Visual Scene Representation with Hierarchical Equivariant Sparse Coding}},
author = {Shewmake, Christian A and Buracas, Domas and Lillemark, Hansen and Shin, Jinho and Bekkers, Erik J and Miolane, Nina and Olshausen, Bruno},
booktitle = {NeurIPS 2023 Workshops: NeurReps},
year = {2023},
url = {https://mlanthology.org/neuripsw/2023/shewmake2023neuripsw-visual/}
}