Re-Evaluating LiDAR Scene Flow

Abstract

Popular benchmarks for self-supervised LiDAR scene flow (stereoKITTI and FlyingThings3D) have unrealistic rates of dynamic motion, unrealistic correspondences, and unrealistic sampling patterns. As a result, progress on these benchmarks is misleading and may cause researchers to focus on the wrong problems. We evaluate a suite of top methods on real-world datasets (Argoverse 2.0, Waymo, and NuScenes) and report several conclusions. First, we find that performance on stereoKITTI is negatively correlated with performance on real-world data. Second, we find that one of this task's key components -- removing the dominant ego-motion -- is better solved by classic ICP than any tested method. Finally, we show that despite the emphasis placed on learning, most performance gains are caused by pre- and post-processing steps: piecewise-rigid refinement and ground removal. We demonstrate this through a baseline method that combines these processing steps with a learning-free test-time flow optimization. This baseline outperforms every evaluated method.
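
The abstract notes that removing the dominant ego-motion is better handled by classic ICP than by the learned methods it evaluates. Below is a minimal sketch (not the authors' exact pipeline) of what that ego-motion step looks like: point-to-point ICP via Open3D to register two consecutive sweeps, followed by converting the resulting rigid transform into the per-point flow it induces. Function names and parameter values here are illustrative assumptions.

    # Sketch: ego-motion removal with classic ICP (Open3D), for illustration only.
    import numpy as np
    import open3d as o3d

    def estimate_ego_motion(pts_t0: np.ndarray, pts_t1: np.ndarray,
                            max_corr_dist: float = 1.0) -> np.ndarray:
        """Return a 4x4 rigid transform aligning sweep t0 to sweep t1 via ICP."""
        src = o3d.geometry.PointCloud()
        src.points = o3d.utility.Vector3dVector(pts_t0)
        tgt = o3d.geometry.PointCloud()
        tgt.points = o3d.utility.Vector3dVector(pts_t1)
        result = o3d.pipelines.registration.registration_icp(
            src, tgt, max_corr_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        return result.transformation

    def rigid_flow_from_transform(pts_t0: np.ndarray, T: np.ndarray) -> np.ndarray:
        """Per-point flow (N x 3) induced by the rigid ego-motion transform T."""
        R, t = T[:3, :3], T[:3, 3]
        return pts_t0 @ R.T + t - pts_t0

    # Usage (illustrative): subtract the ego-motion flow from a predicted total
    # flow to isolate the residual, dynamic component.
    # T_ego = estimate_ego_motion(pts_t0, pts_t1)
    # residual_flow = predicted_flow - rigid_flow_from_transform(pts_t0, T_ego)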

Cite

Text

Chodosh et al. "Re-Evaluating LiDAR Scene Flow." Winter Conference on Applications of Computer Vision, 2024.

Markdown

[Chodosh et al. "Re-Evaluating LiDAR Scene Flow." Winter Conference on Applications of Computer Vision, 2024.](https://mlanthology.org/wacv/2024/chodosh2024wacv-reevaluating/)

BibTeX

@inproceedings{chodosh2024wacv-reevaluating,
  title     = {{Re-Evaluating LiDAR Scene Flow}},
  author    = {Chodosh, Nathaniel and Ramanan, Deva and Lucey, Simon},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2024},
  pages     = {6005-6015},
  url       = {https://mlanthology.org/wacv/2024/chodosh2024wacv-reevaluating/}
}