Implementing Planning KL-Divergence

Abstract

Variants of accuracy and precision are the gold standard by which the computer vision community measures progress of perception algorithms. One reason for the ubiquity of these metrics is that they are largely task-agnostic; we in general seek to detect zero false negatives or positives. The downside of these metrics is that, at worst, they penalize all incorrect detections equally without conditioning on the task or scene, and at best, heuristics need to be chosen to ensure that different mistakes count differently. In this paper, we revisit “Planning KL-Divergence” (PKL), a principled metric for 3D object detection specifically for the task of self-driving. The core idea behind PKL is to isolate the task of object detection and measure the impact the produced detections would induce on the downstream task of driving. We summarize functionality provided by our Python package planning-centric-metrics that implements PKL. nuScenes is in the process of incorporating PKL into their detection leaderboard and we hope that the convenience of our implementation encourages other leaderboards to follow suit.
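To make the core idea concrete, the following is a minimal sketch (not the planning-centric-metrics API) of the quantity PKL measures: the KL divergence between a planner's distribution over future trajectories conditioned on ground-truth detections versus on predicted detections. The discrete four-way trajectory distributions and the `pkl` helper below are illustrative assumptions, not the package's actual implementation.

```python
import numpy as np

def pkl(p_gt, p_pred, eps=1e-12):
    """Illustrative KL divergence D(p_gt || p_pred) between the
    planner's trajectory distribution conditioned on ground-truth
    detections (p_gt) and on predicted detections (p_pred).
    NOTE: a conceptual sketch, not the planning-centric-metrics API."""
    p_gt = np.asarray(p_gt, dtype=float) + eps
    p_pred = np.asarray(p_pred, dtype=float) + eps
    p_gt = p_gt / p_gt.sum()      # normalize to proper distributions
    p_pred = p_pred / p_pred.sum()
    return float(np.sum(p_gt * np.log(p_gt / p_pred)))

# If predicted detections match ground truth, the planner behaves
# identically and PKL is zero.
uniform = [0.25, 0.25, 0.25, 0.25]
assert abs(pkl(uniform, uniform)) < 1e-9

# A detection mistake that shifts the planner toward one trajectory
# yields a positive PKL -- the mistake is scored by its driving impact.
shifted = [0.7, 0.1, 0.1, 0.1]
assert pkl(uniform, shifted) > 0.0
```

This captures why PKL conditions on the scene: two detection errors with identical precision/recall impact can induce very different shifts in the planner's distribution, and PKL penalizes only the shift that matters for driving.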

Cite

Text

Philion et al. "Implementing Planning KL-Divergence." European Conference on Computer Vision Workshops, 2020. doi:10.1007/978-3-030-65414-6_2

Markdown

[Philion et al. "Implementing Planning KL-Divergence." European Conference on Computer Vision Workshops, 2020.](https://mlanthology.org/eccvw/2020/philion2020eccvw-implementing/) doi:10.1007/978-3-030-65414-6_2

BibTeX

@inproceedings{philion2020eccvw-implementing,
  title     = {{Implementing Planning KL-Divergence}},
  author    = {Philion, Jonah and Kar, Amlan and Fidler, Sanja},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2020},
  pages     = {11-18},
  doi       = {10.1007/978-3-030-65414-6_2},
  url       = {https://mlanthology.org/eccvw/2020/philion2020eccvw-implementing/}
}