FlowNet3D++: Geometric Losses for Deep Scene Flow Estimation

Abstract

We present FlowNet3D++, a deep scene flow estimation network. Inspired by classical methods, FlowNet3D++ incorporates geometric constraints into FlowNet3D in the form of point-to-plane distance and angular alignment between individual vectors in the flow field. We demonstrate that the addition of these geometric loss terms improves the previous state-of-the-art FlowNet3D accuracy from 57.85% to 63.43%. To further demonstrate the effectiveness of our geometric constraints, we propose a benchmark for flow estimation on the task of dynamic 3D reconstruction, thus providing a more holistic and practical measure of performance than the breakdown of individual metrics previously used to evaluate scene flow. This is made possible through a novel pipeline that integrates point-based scene flow predictions into a global dense volume. FlowNet3D++ achieves up to a 15.0% reduction in reconstruction error over FlowNet3D, and up to a 35.2% improvement over KillingFusion alone. We will release our scene flow estimation code.
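To make the two geometric loss terms concrete, below is a minimal PyTorch sketch of how a point-to-plane loss and an angular alignment loss could be computed. The function names, the brute-force nearest-neighbour search, and the choice of cosine penalty are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn.functional as F

def point_to_plane_loss(warped, target, target_normals):
    # warped:         (N, 3) source points moved by the predicted flow, p_i + f_i
    # target:         (M, 3) target point cloud
    # target_normals: (M, 3) unit normals of the target points
    #
    # Brute-force nearest neighbour in the target cloud (illustrative only;
    # a KD-tree or grid lookup would be used at scale).
    dists = torch.cdist(warped, target)        # (N, M) pairwise distances
    nn_idx = dists.argmin(dim=1)               # (N,) index of nearest target point
    q = target[nn_idx]                         # nearest target point
    n = target_normals[nn_idx]                 # its surface normal
    # Project the residual onto the normal: distance to the tangent plane.
    plane_dist = ((warped - q) * n).sum(dim=1)
    return (plane_dist ** 2).mean()

def angular_alignment_loss(pred_flow, gt_flow, eps=1e-8):
    # Penalise the angle between predicted and ground-truth flow vectors:
    # zero when they point in the same direction, regardless of magnitude.
    cos = F.cosine_similarity(pred_flow, gt_flow, dim=1, eps=eps)
    return (1.0 - cos).mean()

In training, such terms would be added with tuned weights to the supervised flow regression loss already used by FlowNet3D, e.g. total = l2_flow_loss + w1 * point_to_plane_loss(...) + w2 * angular_alignment_loss(...), where w1 and w2 are hypothetical weighting hyperparameters.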

Cite

Text

Wang et al. "FlowNet3D++: Geometric Losses for Deep Scene Flow Estimation." Winter Conference on Applications of Computer Vision, 2020.

Markdown

[Wang et al. "FlowNet3D++: Geometric Losses for Deep Scene Flow Estimation." Winter Conference on Applications of Computer Vision, 2020.](https://mlanthology.org/wacv/2020/wang2020wacv-flownet3d/)

BibTeX

@inproceedings{wang2020wacv-flownet3d,
  title     = {{FlowNet3D++: Geometric Losses for Deep Scene Flow Estimation}},
  author    = {Wang, Zirui and Li, Shuda and Howard-Jenkins, Henry and Prisacariu, Victor and Chen, Min},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2020},
  url       = {https://mlanthology.org/wacv/2020/wang2020wacv-flownet3d/}
}