Chunking Space and Time with Information Geometry
Abstract
Humans are exposed to a continuous stream of sensory data, yet understand the world in terms of discrete concepts. A large body of work has focused on chunking sensory data in time, i.e. finding event boundaries, typically identified by model prediction errors. Similarly, chunking sensory data in space is the problem at hand when building spatial maps for navigation. In this work, we argue that a single mechanism underlies both: building a hierarchical generative model of perception and action, where chunks at a higher level are formed by segments surpassing a certain information distance at the level below. We demonstrate how this can work in the case of robot navigation, and discuss how this could relate to human cognition in general.
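The core idea, chunk boundaries arising where the information distance between successive belief states exceeds a threshold, can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes 1-D Gaussian belief states and uses a symmetrized KL divergence as the information distance, with `chunk_boundaries` and its threshold as hypothetical names.

```python
import math

def kl_gaussian(mu1, var1, mu2, var2):
    # KL(N(mu1, var1) || N(mu2, var2)) for 1-D Gaussians
    return 0.5 * (var1 / var2 + (mu2 - mu1) ** 2 / var2
                  - 1.0 + math.log(var2 / var1))

def chunk_boundaries(beliefs, threshold):
    """Mark a boundary wherever the symmetrized KL divergence between
    consecutive belief states (mean, variance pairs) exceeds threshold."""
    boundaries = []
    for t in range(1, len(beliefs)):
        mu_a, var_a = beliefs[t - 1]
        mu_b, var_b = beliefs[t]
        dist = (kl_gaussian(mu_a, var_a, mu_b, var_b)
                + kl_gaussian(mu_b, var_b, mu_a, var_a))
        if dist > threshold:
            boundaries.append(t)
    return boundaries

# A large jump in the belief mean triggers a boundary at t = 2:
beliefs = [(0.0, 1.0), (0.1, 1.0), (5.0, 1.0), (5.1, 1.0)]
print(chunk_boundaries(beliefs, threshold=1.0))  # → [2]
```

Small belief updates (the model predicting well) stay below the threshold and are absorbed into the current chunk; a surprising transition opens a new one, whether the stream is indexed by time or by position in space.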
Cite
Text
Verbelen et al. "Chunking Space and Time with Information Geometry." NeurIPS 2022 Workshops: InfoCog, 2022.
Markdown
[Verbelen et al. "Chunking Space and Time with Information Geometry." NeurIPS 2022 Workshops: InfoCog, 2022.](https://mlanthology.org/neuripsw/2022/verbelen2022neuripsw-chunking/)
BibTeX
@inproceedings{verbelen2022neuripsw-chunking,
title = {{Chunking Space and Time with Information Geometry}},
author = {Verbelen, Tim and de Tinguy, Daria and Mazzaglia, Pietro and Catal, Ozan and Safron, Adam},
booktitle = {NeurIPS 2022 Workshops: InfoCog},
year = {2022},
url = {https://mlanthology.org/neuripsw/2022/verbelen2022neuripsw-chunking/}
}