Benchmarking Predictive Coding Networks -- Made Simple

Abstract

In this work, we tackle the problems of efficiency and scalability for predictive coding networks (PCNs) in machine learning. To do so, we propose a library that focuses on performance and simplicity, and we use it to implement a large set of standard benchmarks for the community to use in their experiments. As most works in the field propose their own tasks and architectures, focus on small-scale experiments, and rarely compare against one another, a simple and fast open-source library together with a comprehensive set of benchmarks addresses all of these concerns. We then perform extensive tests on these benchmarks using both existing algorithms for PCNs and adaptations of other methods popular in the bio-plausible deep learning community. All of this has allowed us to (i) test architectures much larger than those commonly used in the literature, on more complex datasets; (ii) reach new state-of-the-art results on all of the tasks and datasets provided; and (iii) clearly highlight the current limitations of PCNs, allowing us to state important future research directions. In the hope of galvanizing community efforts towards one of the main open problems in the field, scalability, we will release the code, tests, and benchmarks.
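
For readers unfamiliar with the training procedure that such benchmarks exercise, below is a minimal, generic sketch of predictive coding inference and learning in NumPy. It is not the paper's library or its API; the layer sizes, tanh activation, number of inference steps, and step sizes are illustrative assumptions only.

# Minimal, generic sketch of predictive coding inference and learning (NumPy).
# NOT the paper's library API; all hyperparameters below are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
sizes = [784, 256, 10]  # input, hidden, output widths (assumed)
W = [rng.normal(0, 0.05, (sizes[l + 1], sizes[l])) for l in range(len(sizes) - 1)]

def f(z):   # activation
    return np.tanh(z)

def df(z):  # activation derivative
    return 1.0 - np.tanh(z) ** 2

def train_step(x_in, y_target, W, T=20, gamma=0.1, alpha=1e-3):
    # Value nodes: input and target are clamped, hidden nodes are free.
    x = [x_in] + [np.zeros(s) for s in sizes[1:-1]] + [y_target]
    # Initialise hidden nodes with a forward pass.
    for l in range(1, len(sizes) - 1):
        x[l] = W[l - 1] @ f(x[l - 1])
    # Inference: T steps of gradient descent on the energy w.r.t. hidden nodes,
    # where the energy is the sum of squared prediction errors across layers.
    for _ in range(T):
        eps = [x[l + 1] - W[l] @ f(x[l]) for l in range(len(W))]
        for l in range(1, len(sizes) - 1):
            dx = -eps[l - 1] + df(x[l]) * (W[l].T @ eps[l])
            x[l] += gamma * dx
    # Learning: one local, Hebbian-like update per weight matrix.
    eps = [x[l + 1] - W[l] @ f(x[l]) for l in range(len(W))]
    for l in range(len(W)):
        W[l] += alpha * np.outer(eps[l], f(x[l]))
    return W

Inference repeatedly relaxes the hidden value nodes towards lower energy while the input and target are clamped; the subsequent weight update uses only locally available prediction errors, which is the property emphasized in the bio-plausible deep learning literature.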

Cite

Text

Pinchetti et al. "Benchmarking Predictive Coding Networks -- Made Simple." International Conference on Learning Representations, 2025.

Markdown

[Pinchetti et al. "Benchmarking Predictive Coding Networks -- Made Simple." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/pinchetti2025iclr-benchmarking/)

BibTeX

@inproceedings{pinchetti2025iclr-benchmarking,
  title     = {{Benchmarking Predictive Coding Networks -- Made Simple}},
  author    = {Pinchetti, Luca and Qi, Chang and Lokshyn, Oleh and Emde, Cornelius and M'Charrak, Amine and Tang, Mufeng and Frieder, Simon and Menzat, Bayar and Oliviers, Gaspard and Bogacz, Rafal and Lukasiewicz, Thomas and Salvatori, Tommaso},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/pinchetti2025iclr-benchmarking/}
}