Occlusion Guided Scene Flow Estimation on 3D Point Clouds
Abstract
3D scene flow estimation is a vital tool for perceiving our environment with depth or range sensors. Unlike optical flow, the data is usually sparse and in most cases partially occluded between two temporal samplings. Here we propose a new scene flow architecture called OGSF-Net, which tightly couples the learning of both flow and occlusions between frames. Their coupled symbiosis results in a more accurate prediction of flow in space. Unlike a traditional multi-action network, our unified approach is fused throughout the network, boosting performance for both occlusion detection and flow estimation. Our architecture is the first to gauge occlusion in 3D scene flow estimation on point clouds. On key datasets such as FlyingThings3D and KITTI, we achieve state-of-the-art results.
Cite
Text

Ouyang and Raviv. "Occlusion Guided Scene Flow Estimation on 3D Point Clouds." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021. doi:10.1109/CVPRW53098.2021.00315

Markdown

[Ouyang and Raviv. "Occlusion Guided Scene Flow Estimation on 3D Point Clouds." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021.](https://mlanthology.org/cvprw/2021/ouyang2021cvprw-occlusion/) doi:10.1109/CVPRW53098.2021.00315

BibTeX
@inproceedings{ouyang2021cvprw-occlusion,
title = {{Occlusion Guided Scene Flow Estimation on 3D Point Clouds}},
author = {Ouyang, Bojun and Raviv, Dan},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2021},
pages = {2805--2814},
doi = {10.1109/CVPRW53098.2021.00315},
url = {https://mlanthology.org/cvprw/2021/ouyang2021cvprw-occlusion/}
}