Learning Optical Flow from Continuous Spike Streams

Abstract

The spike camera is an emerging bio-inspired vision sensor with ultra-high temporal resolution. It records scenes by accumulating photons and outputting continuous binary spike streams. Optical flow is a key task for spike cameras and their applications. Prior work on spike-based optical flow considers only the motion between two moments and trains on graphics-rendered data, which limits generalization. In this paper, we propose a tailored network, Spike2Flow, which extracts information from binary spikes via a temporal-spatial representation based on the differential of spike firing time and spatial information aggregation. The network exploits continuous motion clues through joint correlation decoding. In addition, a new dataset with real-world scenes is proposed for better generalization. Experimental results show that our approach achieves state-of-the-art performance on existing synthetic datasets and on real data captured by spike cameras. The source code and dataset are available at \url{https://github.com/ruizhao26/Spike2Flow}.
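To make the "differential of spike firing time" idea concrete, the following is a minimal sketch of one plausible reading: for each pixel of a binary spike stream, take the interval between its two most recent firings (an inter-spike interval, which is inversely related to incident light intensity). The tensor layout `(T, H, W)`, the function name, and the exact definition used by Spike2Flow are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def firing_time_differential(spikes):
    """Per-pixel interval between the last two spike firings.

    spikes: binary array of shape (T, H, W); spikes[t, y, x] == 1
    means pixel (y, x) fired at time step t. Pixels with fewer than
    two spikes have an undefined interval, returned here as 0.
    (Hypothetical helper; the paper's DSFT may be defined differently.)
    """
    T, H, W = spikes.shape
    last = np.full((H, W), -1, dtype=np.int64)  # most recent firing time
    prev = np.full((H, W), -1, dtype=np.int64)  # firing time before that
    for t in range(T):
        fired = spikes[t] == 1
        prev[fired] = last[fired]  # shift previous firing time back
        last[fired] = t            # record the new firing time
    # Interval is valid only where at least two firings were observed.
    return np.where(prev >= 0, last - prev, 0)
```

A pixel firing at steps 0, 2, and 4 would yield an interval of 2, while a pixel that fired only once yields 0; a network can stack such maps over time as a dense temporal representation of the raw binary stream.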

Cite

Text

Zhao et al. "Learning Optical Flow from Continuous Spike Streams." Neural Information Processing Systems, 2022.

Markdown

[Zhao et al. "Learning Optical Flow from Continuous Spike Streams." Neural Information Processing Systems, 2022.](https://mlanthology.org/neurips/2022/zhao2022neurips-learning/)

BibTeX

@inproceedings{zhao2022neurips-learning,
  title     = {{Learning Optical Flow from Continuous Spike Streams}},
  author    = {Zhao, Rui and Xiong, Ruiqin and Zhao, Jing and Yu, Zhaofei and Fan, Xiaopeng and Huang, Tiejun},
  booktitle = {Neural Information Processing Systems},
  year      = {2022},
  url       = {https://mlanthology.org/neurips/2022/zhao2022neurips-learning/}
}