Generalized Predictive Coding: Bayesian Inference in Static and Dynamic Models
Abstract
Predictive coding networks (PCNs) have an inherent degree of biological plausibility and can perform approximate backpropagation of error in supervised learning settings. However, it is less clear how predictive coding compares to state-of-the-art architectures, such as VAEs, in unsupervised and probabilistic settings. We propose a PCN that, inspired by generalized predictive coding in neuroscience, parameterizes hierarchical distributions of latent states under the Laplace approximation and maximizes model evidence via iterative inference using locally computed error signals. Unlike its neuroscientific inspiration, it uses multi-layer neural networks with nonlinearities between latent distributions. We compare our model to VAE and VLAE baselines on three different image datasets and find that generalized predictive coding shows performance comparable to variational autoencoders trained with exact error backpropagation. Finally, we investigate the possibility of learning temporal dynamics via static prediction by encoding sequential observations in generalized coordinates of motion.
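The sketch below is a minimal, illustrative take on the iterative inference the abstract describes: latent means in a two-layer hierarchy are refined by descending on locally computed, precision-weighted prediction errors, followed by local Hebbian-like weight updates. It is not the authors' implementation; all names (W1, W2, mu1, mu2, infer) are hypothetical, precisions are fixed to the identity, and generalized coordinates of motion are omitted.

```python
# Minimal sketch of hierarchical predictive coding with iterative inference.
# Illustrative only: unit precisions, two latent layers, fully-connected maps.
import numpy as np

rng = np.random.default_rng(0)

# Two latent layers predicting a flattened observation top-down.
dim_obs, dim_1, dim_2 = 64, 32, 16
W1 = rng.normal(0, 0.1, (dim_obs, dim_1))   # layer 1 -> observation
W2 = rng.normal(0, 0.1, (dim_1, dim_2))     # layer 2 -> layer 1

def f(x):   # nonlinearity between latent distributions
    return np.tanh(x)

def df(x):  # its derivative, used in the local gradient terms
    return 1.0 - np.tanh(x) ** 2

def infer(obs, n_steps=50, lr=0.05):
    """Refine latent means by gradient descent on local prediction errors."""
    mu1 = np.zeros(dim_1)
    mu2 = np.zeros(dim_2)
    for _ in range(n_steps):
        e0 = obs - W1 @ f(mu1)   # error at the observation layer
        e1 = mu1 - W2 @ f(mu2)   # error at the first latent layer
        # Each latent state only sees the errors directly below and at its own layer.
        mu1 += lr * (df(mu1) * (W1.T @ e0) - e1)
        mu2 += lr * (df(mu2) * (W2.T @ e1))
    return mu1, mu2, e0, e1

obs = rng.normal(0, 1, dim_obs)
mu1, mu2, e0, e1 = infer(obs)

# Local, Hebbian-like weight updates once inference has settled.
lr_w = 0.01
W1 += lr_w * np.outer(e0, f(mu1))
W2 += lr_w * np.outer(e1, f(mu2))
```

Under these assumptions, both the state and weight updates depend only on quantities available at the layer in question, which is the sense in which the error signals are "locally computed".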
Cite
Text
Ofner et al. "Generalized Predictive Coding: Bayesian Inference in Static and Dynamic Models." NeurIPS 2022 Workshops: SVRHM, 2022.
Markdown
[Ofner et al. "Generalized Predictive Coding: Bayesian Inference in Static and Dynamic Models." NeurIPS 2022 Workshops: SVRHM, 2022.](https://mlanthology.org/neuripsw/2022/ofner2022neuripsw-generalized/)
BibTeX
@inproceedings{ofner2022neuripsw-generalized,
title = {{Generalized Predictive Coding: Bayesian Inference in Static and Dynamic Models}},
author = {Ofner, André and Millidge, Beren and Stober, Sebastian},
booktitle = {NeurIPS 2022 Workshops: SVRHM},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/ofner2022neuripsw-generalized/}
}