Deep Predictive Coding Network for Object Recognition

Abstract

Based on the predictive coding theory in neuroscience, we designed a bi-directional and recurrent neural net, namely the deep predictive coding network (PCN), that has feedforward, feedback, and recurrent connections. Feedback connections from a higher layer carry the prediction of its lower-layer representation; feedforward connections carry the prediction errors to the higher layer. Given image input, PCN runs recursive cycles of bottom-up and top-down computation to update its internal representations and reduce the difference between bottom-up input and top-down prediction at every layer. After multiple cycles of recursive updating, the representation is used for image classification. With benchmark datasets (CIFAR-10/100, SVHN, and MNIST), PCN was found to always outperform its feedforward-only counterpart: a model without any mechanism for recurrent dynamics. Its performance tended to improve given more cycles of computation over time. In short, PCN reuses a single architecture to recursively run bottom-up and top-down processes to refine its representation towards more accurate and definitive object recognition.
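The cycle the abstract describes (top-down prediction, bottom-up error, representation update) can be illustrated with a minimal two-layer sketch. This is not the paper's implementation: the feedback weights `W_fb`, the tied feedforward weights (`W_fb.T`), the learning rate, and the dimensions are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions for a lower-layer representation and a higher layer (assumed).
d_low, d_high = 16, 8

# Feedback weights: the higher layer predicts the lower-layer representation.
# Feedforward weights are tied as the transpose, an assumption for this sketch.
W_fb = rng.normal(scale=0.1, size=(d_low, d_high))

x = rng.normal(size=d_low)  # fixed bottom-up input
r = np.zeros(d_high)        # higher-layer representation, refined over cycles
lr = 0.5                    # step size for the representation update (assumed)

errors = []
for t in range(20):
    pred = W_fb @ r            # top-down prediction of the lower layer
    e = x - pred               # prediction error at the lower layer
    r = r + lr * (W_fb.T @ e)  # feedforward pass carries the error up
    errors.append(np.linalg.norm(e))

print(errors[0] > errors[-1])
```

Each cycle performs gradient descent on the squared prediction error with respect to the higher-layer representation, so the error norm shrinks over cycles, mirroring the recursive refinement described above.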

Cite

Text

Wen et al. "Deep Predictive Coding Network for Object Recognition." International Conference on Machine Learning, 2018.

Markdown

[Wen et al. "Deep Predictive Coding Network for Object Recognition." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/wen2018icml-deep/)

BibTeX

@inproceedings{wen2018icml-deep,
  title     = {{Deep Predictive Coding Network for Object Recognition}},
  author    = {Wen, Haiguang and Han, Kuan and Shi, Junxing and Zhang, Yizhen and Culurciello, Eugenio and Liu, Zhongming},
  booktitle = {International Conference on Machine Learning},
  year      = {2018},
  pages     = {5266--5275},
  volume    = {80},
  url       = {https://mlanthology.org/icml/2018/wen2018icml-deep/}
}