Spike Timing-Based Unsupervised Learning of Orientation, Disparity, and Motion Representations in a Spiking Neural Network

Abstract

Neuromorphic vision sensors present unique advantages over their frame-based counterparts. However, unsupervised learning of efficient visual representations from their asynchronous output remains a challenge, requiring a rethinking of traditional image and video processing methods. Here we present a network of leaky integrate-and-fire neurons that learns representations similar to those of simple and complex cells in the primary visual cortex of mammals from the input of two event-based vision sensors. Through the combination of spike timing-dependent plasticity and homeostatic mechanisms, the network learns visual feature detectors for orientation, disparity, and motion in a fully unsupervised fashion. We validate our approach on a mobile robotic platform.
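To make the abstract's mechanism concrete, below is a minimal sketch of a single leaky integrate-and-fire neuron trained with pair-based spike timing-dependent plasticity and a simple homeostatic threshold adaptation. It is not the authors' implementation: all constants, the Poisson stand-in for event-camera input, and the particular STDP and homeostasis rules are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative parameters (not taken from the paper) ---
n_inputs = 100      # presynaptic event channels
tau_m = 20.0        # membrane time constant (ms)
v_thresh = 1.0      # spiking threshold (adapted homeostatically)
v_reset = 0.0       # reset potential after a spike
tau_pre = 20.0      # presynaptic STDP trace time constant (ms)
tau_post = 20.0     # postsynaptic STDP trace time constant (ms)
a_plus = 0.010      # potentiation scale at a postsynaptic spike
a_minus = 0.012     # depression scale at a presynaptic spike
dt = 1.0            # simulation step (ms)
target_rate = 0.02  # homeostatic target firing probability per step
thresh_gain = 1e-3  # threshold adaptation rate

w = rng.uniform(0.0, 0.1, n_inputs)  # synaptic weights
v = v_reset                          # membrane potential
pre_trace = np.zeros(n_inputs)       # decaying trace of input spikes
post_trace = 0.0                     # decaying trace of output spikes

for step in range(10_000):
    # Random Poisson-like spikes stand in for event-camera events.
    pre_spikes = rng.random(n_inputs) < 0.05

    # Leaky integrate-and-fire dynamics with exponentially decaying traces.
    v += dt / tau_m * (-v) + w @ pre_spikes
    pre_trace = pre_trace * np.exp(-dt / tau_pre) + pre_spikes
    post_trace *= np.exp(-dt / tau_post)

    # Pair-based STDP, depression branch: a presynaptic spike arriving
    # after recent postsynaptic activity weakens its synapse.
    w -= a_minus * post_trace * pre_spikes

    if v >= v_thresh:
        v = v_reset
        post_trace += 1.0
        # Potentiation branch: inputs that fired shortly before this
        # output spike are strengthened.
        w += a_plus * pre_trace
        # Homeostasis: firing raises the threshold ...
        v_thresh += thresh_gain * (1.0 - target_rate)
    else:
        # ... and silence lets it drift back down, pushing the neuron
        # toward the target firing rate.
        v_thresh -= thresh_gain * target_rate

    np.clip(w, 0.0, 1.0, out=w)
```

Under these assumed dynamics, synapses driven by temporally correlated inputs are gradually potentiated while uncorrelated ones are depressed, which is the basic way STDP plus homeostasis can yield selective feature detectors from asynchronous event streams.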

Cite

Text

Barbier et al. "Spike Timing-Based Unsupervised Learning of Orientation, Disparity, and Motion Representations in a Spiking Neural Network." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021. doi:10.1109/CVPRW53098.2021.00152

Markdown

[Barbier et al. "Spike Timing-Based Unsupervised Learning of Orientation, Disparity, and Motion Representations in a Spiking Neural Network." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021.](https://mlanthology.org/cvprw/2021/barbier2021cvprw-spike/) doi:10.1109/CVPRW53098.2021.00152

BibTeX

@inproceedings{barbier2021cvprw-spike,
  title     = {{Spike Timing-Based Unsupervised Learning of Orientation, Disparity, and Motion Representations in a Spiking Neural Network}},
  author    = {Barbier, Thomas and Teulière, Céline and Triesch, Jochen},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2021},
  pages     = {1377--1386},
  doi       = {10.1109/CVPRW53098.2021.00152},
  url       = {https://mlanthology.org/cvprw/2021/barbier2021cvprw-spike/}
}