Iterative VAE as a Predictive Brain Model for Out-of-Distribution Generalization
Abstract
Our ability to generalize beyond training data to novel, out-of-distribution image degradations is a hallmark of primate vision. The predictive brain, exemplified by predictive coding networks (PCNs), has become a prominent neuroscience theory of neural computation. Motivated by the recent successes of variational autoencoders (VAEs) in machine learning, we rigorously derive a correspondence between PCNs and VAEs. This motivates us to consider iterative extensions of VAEs (iVAEs) as plausible variational extensions of PCNs. We further demonstrate that iVAEs generalize to distributional shifts significantly better than both PCNs and VAEs. In addition, we propose a novel measure of recognizability for individual samples which can be tested against human psychophysical data. Overall, we hope this work will spur interest in iVAEs as a promising new direction for modeling in neuroscience.
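The abstract's link between predictive coding and iterative VAE inference can be illustrated with a toy example: instead of a single amortized encoder pass, the latent is refined over several steps by descending the negative ELBO, so each step is driven by a prediction error. The linear decoder, step size, and step count below are illustrative assumptions, not the paper's actual architecture; this is a minimal sketch of the general idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a fixed linear decoder x_hat = W @ z
# standing in for a trained generative network.
d_x, d_z = 8, 2
W = rng.normal(size=(d_x, d_z))

def iterative_inference(x, n_steps=300, lr=0.05, sigma2=1.0):
    """Refine the latent z by gradient ascent on log p(x|z) + log p(z),
    i.e. iterative (predictive-coding-style) inference instead of a
    single amortized encoder pass."""
    z = np.zeros(d_z)
    for _ in range(n_steps):
        err = x - W @ z                    # prediction error on the input
        grad = (W.T @ err) / sigma2 - z    # likelihood term + N(0, I) prior
        z = z + lr * grad                  # one refinement step
    return z

# Usage: infer the latent for an observation generated by the model.
z_true = rng.normal(size=d_z)
x = W @ z_true
z_hat = iterative_inference(x)
```

For this linear-Gaussian toy model the iterates converge to the closed-form MAP estimate `(W.T @ W + I)^-1 @ W.T @ x`, which makes the sketch easy to sanity-check; a real iVAE would perform the same kind of error-driven refinement through a deep decoder.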
Cite
Text
Boutin et al. "Iterative VAE as a Predictive Brain Model for Out-of-Distribution Generalization." NeurIPS 2020 Workshops: SVRHM, 2020.
Markdown
[Boutin et al. "Iterative VAE as a Predictive Brain Model for Out-of-Distribution Generalization." NeurIPS 2020 Workshops: SVRHM, 2020.](https://mlanthology.org/neuripsw/2020/boutin2020neuripsw-iterative/)
BibTeX
@inproceedings{boutin2020neuripsw-iterative,
  title = {{Iterative VAE as a Predictive Brain Model for Out-of-Distribution Generalization}},
  author = {Boutin, Victor and Zerroug, Aimen and Jung, Minju and Serre, Thomas},
  booktitle = {NeurIPS 2020 Workshops: SVRHM},
  year = {2020},
  url = {https://mlanthology.org/neuripsw/2020/boutin2020neuripsw-iterative/}
}