Live Demonstration: Unsupervised Event-Based Learning of Optical Flow, Depth and Egomotion

Abstract

We propose a demo of our work, Unsupervised Event-based Learning of Optical Flow, Depth and Egomotion, which will also appear at CVPR 2019. Our demo consists of a CNN that takes as input events from a DAVIS-346b event camera, represented as a discretized event volume, and predicts optical flow at each pixel in the image. Thanks to the generalization ability of our network, we are able to predict accurate optical flow for a very wide range of scenes, including those with very fast motion and challenging lighting.
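The "discretized event volume" mentioned above accumulates a stream of events (pixel location, timestamp, polarity) into a fixed number of temporal bins, with each event's polarity split between its two nearest bins by linear interpolation in time. A minimal sketch of this idea in NumPy is below; the function name, argument layout, and bin count are illustrative assumptions, not the authors' code:

```python
import numpy as np

def event_volume(events, H, W, B):
    """Accumulate events into a (B, H, W) volume (hypothetical helper).

    events: array of shape (N, 4) with columns (x, y, t, polarity),
            polarity in {-1, +1}.
    H, W:   sensor height and width in pixels.
    B:      number of temporal bins.
    """
    vol = np.zeros((B, H, W), dtype=np.float32)
    xs = events[:, 0].astype(int)
    ys = events[:, 1].astype(int)
    ts = events[:, 2]
    ps = events[:, 3]

    # Normalize timestamps to the range [0, B - 1].
    t0, t1 = ts.min(), ts.max()
    tn = (B - 1) * (ts - t0) / max(t1 - t0, 1e-9)

    # Split each event's polarity between the two nearest bins
    # with linear interpolation weights.
    lo = np.floor(tn).astype(int)
    hi = np.minimum(lo + 1, B - 1)
    w_hi = tn - lo
    w_lo = 1.0 - w_hi

    # np.add.at handles repeated (bin, y, x) indices correctly.
    np.add.at(vol, (lo, ys, xs), ps * w_lo)
    np.add.at(vol, (hi, ys, xs), ps * w_hi)
    return vol
```

Because the two interpolation weights for each event sum to one, the volume preserves the total signed polarity of the event stream while retaining temporal ordering information across the bins.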

Cite

Text

Zhu et al. "Live Demonstration: Unsupervised Event-Based Learning of Optical Flow, Depth and Egomotion." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019. doi:10.1109/CVPRW.2019.00216

Markdown

[Zhu et al. "Live Demonstration: Unsupervised Event-Based Learning of Optical Flow, Depth and Egomotion." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.](https://mlanthology.org/cvprw/2019/zhu2019cvprw-live/) doi:10.1109/CVPRW.2019.00216

BibTeX

@inproceedings{zhu2019cvprw-live,
  title     = {{Live Demonstration: Unsupervised Event-Based Learning of Optical Flow, Depth and Egomotion}},
  author    = {Zhu, Alex Zihao and Yuan, Liangzhe and Chaney, Kenneth and Daniilidis, Kostas},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2019},
  pages     = {1694},
  doi       = {10.1109/CVPRW.2019.00216},
  url       = {https://mlanthology.org/cvprw/2019/zhu2019cvprw-live/}
}