Brain-Inspired Robust Vision Using Convolutional Neural Networks with Feedback

Abstract

Humans have the remarkable ability to correctly classify images despite degradation. Many studies suggest that this hallmark of human vision results from the interaction between feedforward signals from the bottom-up pathways of the visual cortex and feedback signals provided by top-down pathways. Motivated by this interaction, we propose a new neuro-inspired model, Convolutional Neural Networks with Feedback (CNN-F). CNN-F extends a CNN with a feedback generative network, combining bottom-up and top-down inference to perform approximate loopy belief propagation. We show that CNN-F's iterative inference enables disentanglement of latent variables across layers. We validate the advantages of CNN-F over the baseline CNN: our experimental results suggest that CNN-F is more robust to image degradations such as pixel noise, occlusion, and blur. Furthermore, we show that CNN-F can restore original images from degraded ones with high reconstruction accuracy while introducing negligible artifacts.
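The abstract describes CNN-F as a CNN paired with a feedback generative network that alternates bottom-up and top-down passes over several inference iterations. Below is a minimal PyTorch sketch of that alternating loop, not the authors' implementation: the layer sizes, iteration count, and all names (`CNNF`, `encoder`, `decoder`) are hypothetical illustrations of the feedforward/feedback scheme.

```python
# Hypothetical sketch of an alternating feedforward/feedback loop in the
# spirit of CNN-F. Not the authors' code; architecture and names invented.
import torch
import torch.nn as nn

class CNNF(nn.Module):
    def __init__(self, num_classes: int = 10, iterations: int = 3):
        super().__init__()
        self.iterations = iterations
        # Bottom-up (feedforward) pathway: image -> latent features.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Top-down (feedback) pathway: latent features -> reconstructed image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )
        self.classifier = nn.Linear(64 * 7 * 7, num_classes)

    def forward(self, x):
        recon = x  # start inference from the (possibly degraded) input
        for _ in range(self.iterations):
            z = self.encoder(recon)   # bottom-up pass
            recon = self.decoder(z)   # top-down reconstruction
        logits = self.classifier(z.flatten(1))
        return logits, recon          # class prediction + restored image

model = CNNF()
logits, restored = model(torch.randn(8, 1, 28, 28))  # e.g. MNIST-sized input
```

In this reading, classification uses the latents from the final bottom-up pass, while the last top-down pass yields the restored image; the paper's actual inference corresponds to approximate loopy belief propagation rather than this plain autoencoding loop.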

Cite

Text

Huang et al. "Brain-Inspired Robust Vision Using Convolutional Neural Networks with Feedback." NeurIPS 2019 Workshops: Neuro_AI, 2019.

Markdown

[Huang et al. "Brain-Inspired Robust Vision Using Convolutional Neural Networks with Feedback." NeurIPS 2019 Workshops: Neuro_AI, 2019.](https://mlanthology.org/neuripsw/2019/huang2019neuripsw-braininspired/)

BibTeX

@inproceedings{huang2019neuripsw-braininspired,
  title     = {{Brain-Inspired Robust Vision Using Convolutional Neural Networks with Feedback}},
  author    = {Huang, Yujia and Dai, Sihui and Nguyen, Tan and Bao, Pinglei and Tsao, Doris Y. and Baraniuk, Richard G. and Anandkumar, Anima},
  booktitle = {NeurIPS 2019 Workshops: Neuro_AI},
  year      = {2019},
  url       = {https://mlanthology.org/neuripsw/2019/huang2019neuripsw-braininspired/}
}