A Deep Temporal Fusion Framework for Scene Flow Using a Learnable Motion Model and Occlusions

Abstract

Motion estimation is one of the core challenges in computer vision. With traditional dual-frame approaches, occlusions and out-of-view motions are a limiting factor, especially in the context of environmental perception for vehicles, due to the large (ego-) motion of objects. Our work proposes a novel data-driven approach for the temporal fusion of scene flow estimates in a multi-frame setup to overcome the issue of occlusion. Contrary to most previous methods, we do not rely on a constant motion model, but instead learn a generic temporal relation of motion from data. In a second step, a neural network combines bi-directional scene flow estimates from a common reference frame, yielding a refined estimate and, as a natural byproduct, occlusion masks. This way, our approach provides a fast multi-frame extension for a variety of scene flow estimators, which outperforms the underlying dual-frame approaches.
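The abstract outlines a two-step pipeline: a learned motion model first predicts forward motion from the backward scene flow estimate, and a fusion network then merges this prediction with the direct forward estimate, producing a refined scene flow and an occlusion mask. The following minimal PyTorch sketch illustrates that structure only; the module names (MotionModel, FusionNet), layer widths, and the 4-channel scene flow representation (e.g., optical flow plus disparity components) are illustrative assumptions, not the authors' released code.

# Minimal sketch of the two-step idea from the abstract. All names and
# layer sizes are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class MotionModel(nn.Module):
    """Learned temporal relation: predicts forward scene flow from the
    backward estimate, replacing a hand-crafted constant-motion model."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 4, 3, padding=1),
        )

    def forward(self, flow_bwd):          # (B, 4, H, W): estimate for t -> t-1
        return self.net(flow_bwd)         # predicted flow for t -> t+1

class FusionNet(nn.Module):
    """Fuses the direct forward estimate with the motion-model prediction;
    the extra output channel is an occlusion logit (the byproduct)."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(8, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 5, 3, padding=1),  # 4 flow + 1 occlusion
        )

    def forward(self, flow_fwd, flow_pred):
        out = self.net(torch.cat([flow_fwd, flow_pred], dim=1))
        return out[:, :4], torch.sigmoid(out[:, 4:])  # fused flow, occlusion

# Usage with dummy dual-frame estimates in the common reference frame t:
motion_model, fusion = MotionModel(), FusionNet()
flow_fwd = torch.randn(1, 4, 64, 128)   # direct estimate for t -> t+1
flow_bwd = torch.randn(1, 4, 64, 128)   # estimate for t -> t-1
fused, occ = fusion(flow_fwd, motion_model(flow_bwd))
print(fused.shape, occ.shape)           # (1, 4, 64, 128) and (1, 1, 64, 128)

Trained end-to-end against ground-truth scene flow, a fusion module of this kind can let the occlusion channel emerge without explicit occlusion labels, which matches the "natural byproduct" phrasing in the abstract.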

Cite

Text

Schuster et al. "A Deep Temporal Fusion Framework for Scene Flow Using a Learnable Motion Model and Occlusions." Winter Conference on Applications of Computer Vision, 2021.

Markdown

[Schuster et al. "A Deep Temporal Fusion Framework for Scene Flow Using a Learnable Motion Model and Occlusions." Winter Conference on Applications of Computer Vision, 2021.](https://mlanthology.org/wacv/2021/schuster2021wacv-deep/)

BibTeX

@inproceedings{schuster2021wacv-deep,
  title     = {{A Deep Temporal Fusion Framework for Scene Flow Using a Learnable Motion Model and Occlusions}},
  author    = {Schuster, Rene and Unger, Christian and Stricker, Didier},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2021},
  pages     = {247--255},
  url       = {https://mlanthology.org/wacv/2021/schuster2021wacv-deep/}
}