Deep Predictive Coding Networks

Abstract

The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, commonly used fixed priors cannot adjust to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative model that empirically alters priors on the latent representations in a dynamic and context-sensitive manner. The model captures temporal dependencies in time-varying signals and uses top-down information to modulate the representations in lower layers. The centerpiece of our model is a novel procedure for inferring sparse states of a dynamic model, which is used for feature extraction. We also extend this feature-extraction block with a pooling function that captures locally invariant representations. When applied to natural video data, our method learns high-level visual features. We also demonstrate the role of the top-down connections by showing the robustness of the proposed model to structured noise.
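The sparse state inference at the heart of the model can be pictured as an iterative shrinkage (ISTA-style) procedure that trades off reconstructing the observation against staying close to a temporal prediction of the state. The sketch below is illustrative only, not the authors' implementation: the observation matrix `C`, state-transition matrix `A`, and the weights `lam` (sparsity) and `mu` (prediction penalty) are assumed names for a generic dynamic sparse coding step.

```python
import numpy as np

def soft_threshold(v, t):
    # Element-wise shrinkage operator, the proximal map of the L1 penalty.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def infer_sparse_state(y, C, A, x_prev, lam=0.1, mu=0.5, n_iter=100):
    """ISTA-style inference of a sparse state x minimizing
        0.5 * ||y - C x||^2 + (mu / 2) * ||x - A x_prev||^2 + lam * ||x||_1,
    i.e. reconstruction error plus a temporal prediction prior.
    (Illustrative sketch; not the paper's exact formulation.)"""
    x = np.zeros(C.shape[1])
    x_pred = A @ x_prev                       # prediction from the previous state
    L = np.linalg.norm(C, 2) ** 2 + mu        # Lipschitz bound -> safe step size
    for _ in range(n_iter):
        grad = C.T @ (C @ x - y) + mu * (x - x_pred)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

A top-down connection could enter this picture by shaping `x_pred` (the prior mean) from a higher layer, which is the context-sensitive modulation the abstract describes.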

Cite

Text

Chalasani and Príncipe. "Deep Predictive Coding Networks." International Conference on Learning Representations, 2013. doi:10.48550/arXiv.1301.3541

Markdown

[Chalasani and Príncipe. "Deep Predictive Coding Networks." International Conference on Learning Representations, 2013.](https://mlanthology.org/iclr/2013/chalasani2013iclr-deep/) doi:10.48550/arXiv.1301.3541

BibTeX

@inproceedings{chalasani2013iclr-deep,
  title     = {{Deep Predictive Coding Networks}},
  author    = {Chalasani, Rakesh and Príncipe, José C.},
  booktitle = {International Conference on Learning Representations},
  year      = {2013},
  doi       = {10.48550/arXiv.1301.3541},
  url       = {https://mlanthology.org/iclr/2013/chalasani2013iclr-deep/}
}