Visual Representations Derived from Multiplicative Interactions
Abstract
Biological sensory systems appear to rely on canonical nonlinear computations that can be readily adapted to a broad range of representational objectives. Here we test the hypothesis that one such computation—multiplicative interaction—is a pervasive nonlinearity that underlies the representational transformations in human vision. We computed local multiplicative interactions of features in several classes of convolutional models and used the resulting representations to predict object-evoked responses in voxelwise models of human fMRI data. We found that multiplicative interactions predicted widespread representations throughout the ventral stream and were competitive with state-of-the-art supervised deep nets. Surprisingly, the performance of multiplicative interactions did not require supervision and could be achieved even with random or hand-engineered convolutional filters. These findings suggest that multiplicative interaction may be a canonical computation for feature transformations in human vision.
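The abstract's core operation — computing local multiplicative interactions among convolutional feature channels — can be illustrated with a minimal sketch. This is a hypothetical implementation, not the paper's exact definition: here "multiplicative interaction" is taken to mean the elementwise pairwise products of feature channels at each spatial location, and the filters are random, echoing the finding that supervision is not required.

```python
import numpy as np

def multiplicative_interactions(fmap):
    """Pairwise channel products at each spatial location.

    fmap: (C, H, W) convolutional feature map.
    Returns a (C*(C+1)//2, H, W) array whose channels are the
    products f_i * f_j for all pairs i <= j. (Illustrative only;
    the paper's local-interaction definition may differ.)
    """
    C, H, W = fmap.shape
    i, j = np.triu_indices(C)          # all channel pairs, i <= j
    return fmap[i] * fmap[j]           # broadcast elementwise product

# Example: feature map from hypothetical random convolutional filters.
rng = np.random.default_rng(0)
fmap = rng.standard_normal((4, 8, 8))  # C=4 channels, 8x8 spatial grid
feats = multiplicative_interactions(fmap)
print(feats.shape)                     # (10, 8, 8): 4*(4+1)//2 product channels
```

The expanded product channels would then serve as the feature space for the voxelwise encoding models described in the abstract.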
Cite
Text
Elmoznino and Bonner. "Visual Representations Derived from Multiplicative Interactions." NeurIPS 2020 Workshops: SVRHM, 2020.

Markdown

[Elmoznino and Bonner. "Visual Representations Derived from Multiplicative Interactions." NeurIPS 2020 Workshops: SVRHM, 2020.](https://mlanthology.org/neuripsw/2020/elmoznino2020neuripsw-visual/)

BibTeX
@inproceedings{elmoznino2020neuripsw-visual,
title = {{Visual Representations Derived from Multiplicative Interactions}},
author = {Elmoznino, Eric and Bonner, Michael},
booktitle = {NeurIPS 2020 Workshops: SVRHM},
year = {2020},
url = {https://mlanthology.org/neuripsw/2020/elmoznino2020neuripsw-visual/}
}