Vector Quantization in the Brain: Grid-like Codes in World Models

Abstract

We propose Grid-like Code Quantization (GCQ), a brain-inspired method for compressing observation-action sequences into discrete representations using grid-like patterns that arise in attractor dynamics. Unlike conventional vector quantization approaches that operate on static inputs, GCQ performs spatiotemporal compression through an action-conditioned codebook, where codewords are derived from continuous attractor neural networks and dynamically selected based on the action taken. This enables GCQ to jointly compress space and time, serving as a unified world model. The resulting representation supports long-horizon prediction, goal-directed planning, and inverse modeling. Experiments across diverse tasks demonstrate GCQ's effectiveness in both compact encoding and downstream task performance. Our work offers both a computational tool for efficient sequence modeling and a theoretical perspective on the formation of grid-like codes in neural systems.
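To make the action-conditioned codebook concrete, here is a minimal sketch of one plausible reading of that mechanism: each discrete action indexes its own sub-codebook, and a continuous embedding is quantized to the nearest codeword within the sub-codebook of the action taken. All names, shapes, and the straight-through gradient trick are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class ActionConditionedVQ(nn.Module):
    """Toy action-conditioned vector quantizer (illustrative sketch only).

    Each discrete action indexes its own sub-codebook; an embedding z is
    snapped to the nearest codeword within the sub-codebook of the action
    taken. Names and shapes are assumptions, not GCQ's implementation.
    """

    def __init__(self, num_actions: int, codes_per_action: int, dim: int):
        super().__init__()
        # Codebook of shape (actions, codewords per action, embedding dim).
        self.codebook = nn.Parameter(
            torch.randn(num_actions, codes_per_action, dim)
        )

    def forward(self, z: torch.Tensor, action: torch.Tensor):
        # z: (B, D) continuous embeddings; action: (B,) integer action ids.
        codes = self.codebook[action]                          # (B, K, D)
        dists = torch.cdist(z.unsqueeze(1), codes).squeeze(1)  # (B, K)
        idx = dists.argmin(dim=-1)                             # nearest codeword
        z_q = codes[torch.arange(z.size(0)), idx]              # (B, D)
        # Straight-through estimator: copy gradients from z_q back to z,
        # the standard trick in VQ-VAE-style quantizers.
        z_q = z + (z_q - z).detach()
        return z_q, idx


# Minimal usage: quantize a batch of 4 embeddings under sampled actions.
vq = ActionConditionedVQ(num_actions=4, codes_per_action=16, dim=32)
z = torch.randn(4, 32)
a = torch.randint(0, 4, (4,))
z_q, idx = vq(z, a)
print(z_q.shape, idx.shape)  # torch.Size([4, 32]) torch.Size([4])
```

Selecting codewords within a per-action sub-codebook is one simple way to realize "dynamically selected based on the action taken"; in the paper the codewords are additionally derived from continuous attractor neural networks, which this toy sketch omits.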

Cite

Text

Peng et al. "Vector Quantization in the Brain: Grid-like Codes in World Models." Advances in Neural Information Processing Systems, 2025.

Markdown

[Peng et al. "Vector Quantization in the Brain: Grid-like Codes in World Models." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/peng2025neurips-vector/)

BibTeX

@inproceedings{peng2025neurips-vector,
  title     = {{Vector Quantization in the Brain: Grid-like Codes in World Models}},
  author    = {Peng, Xiangyuan and Dong, Xingsi and Wu, Si},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/peng2025neurips-vector/}
}