Place Cells and Spatial Navigation Based on 2D Visual Feature Extraction, Path Integration, and Reinforcement Learning

Abstract

We model hippocampal place cells and head-direction cells by combining allothetic (visual) and idiothetic (proprioceptive) stimuli. Visual input, provided by a video camera on a miniature robot, is preprocessed by a set of Gabor filters on 31 nodes of a log-polar retinotopic graph. Unsupervised Hebbian learning is employed to incrementally build a population of localized overlapping place fields. Place cells serve as basis functions for reinforcement learning. Experimental results for goal-oriented navigation of a mobile robot are presented.
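The abstract outlines a pipeline: Gabor filtering sampled on a log-polar retinotopic graph, unsupervised recruitment of localized place cells, and place-cell activity used as basis functions for reinforcement learning. The Python sketch below is only a rough illustration of the first two stages under assumed parameters; the filter bank, node layout, tuning width, and recruitment threshold are placeholders rather than values from the paper, and it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)


def gabor_kernel(size, freq, theta, sigma):
    """Real part of a Gabor filter: Gaussian envelope times a cosine grating."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    return np.exp(-(xs**2 + ys**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)


def log_polar_nodes(n_rings, per_ring, img_shape):
    """Sample points on a log-polar grid centred on the image (a rough
    stand-in for the retinotopic graph mentioned in the abstract)."""
    h, w = img_shape
    nodes = []
    for r in range(n_rings):
        radius = 2.0 * (2.0 ** r)                     # radii grow geometrically
        for k in range(per_ring):
            a = 2.0 * np.pi * k / per_ring
            nodes.append((int(h / 2 + radius * np.sin(a)),
                          int(w / 2 + radius * np.cos(a))))
    return nodes


def visual_features(image, nodes, kernels):
    """Gabor responses sampled at each node -> allothetic feature vector."""
    half = kernels[0].shape[0] // 2
    padded = np.pad(image, half, mode="edge")
    return np.array([np.sum(padded[y:y + 2 * half + 1, x:x + 2 * half + 1] * k)
                     for (y, x) in nodes for k in kernels])


class PlaceCells:
    """Incrementally recruit place cells: each stores a prototype feature
    vector and fires with Gaussian tuning around it (a simplified reading of
    the unsupervised Hebbian scheme in the abstract)."""

    def __init__(self, width=5.0, threshold=0.3):
        self.prototypes = []        # one prototype feature vector per cell
        self.width = width          # tuning width (placeholder value)
        self.threshold = threshold  # recruit a new cell below this activity

    def activity(self, feat):
        if not self.prototypes:
            return np.zeros(0)
        d = np.linalg.norm(np.stack(self.prototypes) - feat, axis=1)
        return np.exp(-(d / self.width) ** 2)

    def observe(self, feat):
        a = self.activity(feat)
        if a.size == 0 or a.max() < self.threshold:
            self.prototypes.append(feat.copy())       # recruit a new place cell
        return self.activity(feat)


if __name__ == "__main__":
    # Illustrative parameters only; the paper's filter bank and graph differ.
    kernels = [gabor_kernel(9, 0.25, th, 3.0)
               for th in np.linspace(0.0, np.pi, 4, endpoint=False)]
    nodes = log_polar_nodes(n_rings=4, per_ring=8, img_shape=(64, 64))
    cells = PlaceCells()
    for _ in range(10):                               # ten synthetic "views"
        view = rng.random((64, 64))
        rates = cells.observe(visual_features(view, nodes, kernels))
    print(len(cells.prototypes), "place cells recruited")
```

In the paper, the resulting place-cell population serves as a set of basis functions for reinforcement learning; in a sketch like this one, the activity vector returned by observe would play the role of the state representation, for example with action values approximated as weighted sums of place-cell firing rates and the weights trained by temporal-difference learning.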

Cite

Text

Arleo et al. "Place Cells and Spatial Navigation Based on 2D Visual Feature Extraction, Path Integration, and Reinforcement Learning." Neural Information Processing Systems, 2000.

Markdown

[Arleo et al. "Place Cells and Spatial Navigation Based on 2D Visual Feature Extraction, Path Integration, and Reinforcement Learning." Neural Information Processing Systems, 2000.](https://mlanthology.org/neurips/2000/arleo2000neurips-place/)

BibTeX

@inproceedings{arleo2000neurips-place,
  title     = {{Place Cells and Spatial Navigation Based on 2D Visual Feature Extraction, Path Integration, and Reinforcement Learning}},
  author    = {Arleo, Angelo and Smeraldi, Fabrizio and Hug, Stéphane and Gerstner, Wulfram},
  booktitle = {Neural Information Processing Systems},
  year      = {2000},
  pages     = {89--95},
  url       = {https://mlanthology.org/neurips/2000/arleo2000neurips-place/}
}