Object Scene Flow for Autonomous Vehicles

Abstract

This paper proposes a novel model and dataset for 3D scene flow estimation with an application to autonomous driving. Taking advantage of the fact that outdoor scenes often decompose into a small number of independently moving objects, we represent each element in the scene by its rigid motion parameters and each superpixel by a 3D plane as well as an index to the corresponding object. This minimal representation increases robustness and leads to a discrete-continuous CRF where the data term decomposes into pairwise potentials between superpixels and objects. Moreover, our model intrinsically segments the scene into its constituent dynamic components. We demonstrate the performance of our model on existing benchmarks as well as a novel realistic dataset with scene flow ground truth. We obtain this dataset by annotating 400 dynamic scenes from the KITTI raw data collection using detailed 3D CAD models for all vehicles in motion. Our experiments also reveal novel challenges which cannot be handled by existing methods.
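The core idea of the representation can be illustrated with a minimal sketch: each moving object carries one set of rigid motion parameters, each superpixel stores only a plane and an object index, so the 3D motion of any point on a superpixel is fully determined by its assigned object's motion. The class and function names below are illustrative, not from the paper's code, and the plane parameterization shown is one common choice, assumed here for concreteness.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class RigidObject:
    """Rigid motion over one time step: X' = R @ X + t."""
    R: np.ndarray  # 3x3 rotation matrix
    t: np.ndarray  # 3-vector translation

@dataclass
class Superpixel:
    """A superpixel is just a 3D plane plus an index into the object list."""
    plane: np.ndarray  # plane parameters (assumed parameterization, e.g. n / d)
    obj_id: int        # index of the rigid object this superpixel belongs to

def point_flow(X, objects, sp):
    """3D scene flow of a point X lying on superpixel sp:
    determined entirely by the assigned object's rigid motion."""
    o = objects[sp.obj_id]
    return o.R @ X + o.t - X

# A static background object (identity motion) plus one translating car
# would give every background superpixel zero flow and every car
# superpixel the car's motion, which is what yields the intrinsic
# segmentation into dynamic components mentioned in the abstract.
```

This is why the representation is "minimal": with a handful of objects, the continuous unknowns per superpixel reduce to the plane parameters, while motion is shared across all superpixels assigned to the same object.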

Cite

Text

Menze and Geiger. "Object Scene Flow for Autonomous Vehicles." Conference on Computer Vision and Pattern Recognition, 2015. doi:10.1109/CVPR.2015.7298925

Markdown

[Menze and Geiger. "Object Scene Flow for Autonomous Vehicles." Conference on Computer Vision and Pattern Recognition, 2015.](https://mlanthology.org/cvpr/2015/menze2015cvpr-object/) doi:10.1109/CVPR.2015.7298925

BibTeX

@inproceedings{menze2015cvpr-object,
  title     = {{Object Scene Flow for Autonomous Vehicles}},
  author    = {Menze, Moritz and Geiger, Andreas},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2015},
  doi       = {10.1109/CVPR.2015.7298925},
  url       = {https://mlanthology.org/cvpr/2015/menze2015cvpr-object/}
}