Similarity-Preserving Neural Networks from GPLVM and Information Theory
Abstract
This work proposes a way of deriving the structure of plausible canonical microcircuit models, replete with feedforward, lateral, and feedback connections, from information-theoretic considerations. The resulting circuits exhibit biologically plausible features, such as online trainability and local synaptic update rules reminiscent of the Hebbian principle. Our work achieves these goals by rephrasing Gaussian Process Latent Variable Models as a special case of the more recently developed similarity matching framework. One remarkable aspect of the resulting network is the role of lateral interactions in preventing overfitting. Overall, our study emphasizes the importance of recurrent connections in neural networks, both for cognitive tasks in the brain and applications to artificial intelligence.
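To make the "online, local, Hebbian-like" claim concrete, the sketch below shows the classical online similarity-matching network (in the spirit of the Pehlevan–Chklovskii framework that this paper builds on), with Hebbian feedforward weights and anti-Hebbian lateral weights. The function name `similarity_matching_online`, the constant learning rate, and the fixed-point iteration count are illustrative assumptions; this is a generic sketch of the framework, not the GPLVM-derived circuit presented in the paper.

```python
import numpy as np

def similarity_matching_online(X, k, eta=0.01, n_iter=50, seed=0):
    """Online similarity matching sketch (Pehlevan-Chklovskii style).

    X: (n_samples, d) data matrix processed one sample at a time.
    k: output dimensionality.
    Returns feedforward weights W (Hebbian) and lateral weights M (anti-Hebbian).
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / np.sqrt(d), size=(k, d))  # feedforward weights
    M = np.zeros((k, k))                                  # lateral weights, zero diagonal

    for x in X:
        # Recurrent neural dynamics: iterate y = W x - M y toward a fixed point.
        y = W @ x
        for _ in range(n_iter):
            y = W @ x - M @ y

        # Local synaptic updates: each weight depends only on its own
        # pre- and post-synaptic activities (Hebbian / anti-Hebbian).
        W += eta * (np.outer(y, x) - W)
        M += eta * (np.outer(y, y) - M)
        np.fill_diagonal(M, 0.0)  # no self-inhibition

    return W, M
```

In this formulation the outputs approximately preserve pairwise similarities (inner products) of the inputs, which is the sense in which the circuit is "similarity preserving"; the paper's contribution is to arrive at such a circuit, including its feedback connections, from a GPLVM and information-theoretic starting point rather than positing it directly.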
Cite
Text
Bahroun et al. "Similarity-Preserving Neural Networks from GPLVM and Information Theory." NeurIPS 2022 Workshops: InfoCog, 2022.
Markdown
[Bahroun et al. "Similarity-Preserving Neural Networks from GPLVM and Information Theory." NeurIPS 2022 Workshops: InfoCog, 2022.](https://mlanthology.org/neuripsw/2022/bahroun2022neuripsw-similaritypreserving/)
BibTeX
@inproceedings{bahroun2022neuripsw-similaritypreserving,
title = {{Similarity-Preserving Neural Networks from GPLVM and Information Theory}},
author = {Bahroun, Yanis and Acharya, Atithi and Chklovskii, Dmitri and Sengupta, Anirvan M.},
booktitle = {NeurIPS 2022 Workshops: InfoCog},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/bahroun2022neuripsw-similaritypreserving/}
}