Local Geometry Constraints in V1 with Deep Recurrent Autoencoders
Abstract
Sparse coding is a pillar of computational neuroscience, learning filters that describe the sensitivities of mammalian simple cell receptive fields (SCRFs) well in a least-squares sense. The overall distribution of SCRFs learned by purely sparse models, however, fails to match the one found experimentally. A number of subsequent updates to overcome this problem either restrict the types of sparsity considered or disregard the dictionary learning framework entirely. We propose a weighted $\ell_1$ (WL) penalty that preserves sparsity while imposing a qualitatively new form of it, one that produces receptive field profiles matching those found in primate data by more explicitly encouraging artificial neurons to use a similar subset of dictionary basis functions. The mathematical interpretation of the penalty as a Laplacian smoothness constraint implies an early-stage form of clustering in primary visual cortex, suggesting how the brain may exploit manifold geometry while balancing sparse and efficient representations.
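The abstract does not spell out the objective. As a rough illustration only, the sketch below assumes the WL penalty enters a standard sparse coding objective as a separable weighted $\ell_1$ term, $\tfrac{1}{2}\|x - Dz\|_2^2 + \lambda \sum_i w_i |z_i|$, minimized with ISTA; the weighting scheme, function names, and toy data are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch: ISTA-style sparse coding with a weighted l1 penalty.
# The specific weighting scheme (e.g., weights derived from a graph Laplacian
# over dictionary elements) is an illustrative assumption, not the paper's method.
import numpy as np

def soft_threshold(z, t):
    """Element-wise soft-thresholding, the prox of a (weighted) l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_ista(x, D, weights, lam=0.1, n_iter=200):
    """Minimize 0.5*||x - D z||^2 + lam * sum_i weights_i * |z_i| via ISTA."""
    L = np.linalg.norm(D, ord=2) ** 2          # Lipschitz constant of the quadratic term
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)               # gradient of the reconstruction term
        z = soft_threshold(z - grad / L, lam * weights / L)
    return z

# Toy usage: uniform weights recover the plain l1 penalty; non-uniform weights
# (e.g., derived from neighborhood structure among atoms) would encourage
# "nearby" dictionary elements to be used together.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms
x = rng.standard_normal(64)
weights = np.ones(128)
z = weighted_ista(x, D, weights)
```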
Cite
Huml and Ba. "Local Geometry Constraints in V1 with Deep Recurrent Autoencoders." NeurIPS 2022 Workshops: SVRHM, 2022.
@inproceedings{huml2022neuripsw-local,
title = {{Local Geometry Constraints in V1 with Deep Recurrent Autoencoders}},
author = {Huml, Jonathan Raymond and Ba, Demba E.},
booktitle = {NeurIPS 2022 Workshops: SVRHM},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/huml2022neuripsw-local/}
}