Benchmarking Sampling-Based Probabilistic Object Detectors
Abstract
This paper provides the first benchmark for sampling-based probabilistic object detectors. A probabilistic object detector expresses uncertainty for every detection, and this uncertainty should reliably indicate object localisation and classification performance. We compare two sampling-based uncertainty techniques, Monte Carlo Dropout and Deep Ensembles, when applied to a one-stage and a two-stage object detector: the Single Shot MultiBox Detector and Faster R-CNN. Our results show that Deep Ensembles outperform MC Dropout for both types of detector. We also introduce a new merging strategy for combining the outputs of sampling-based techniques in one-stage object detectors, and show that this strategy is competitive with previously established strategies while having only one free parameter.
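As a rough illustration of the Monte Carlo Dropout idea compared above (this is not the paper's implementation): dropout is kept active at test time, and repeated stochastic forward passes yield a predictive mean plus a spread that serves as an uncertainty estimate. The tiny "network", its weights, and the function names below are all hypothetical.

```python
import random
import statistics

random.seed(0)

WEIGHTS = [0.4, -0.2, 0.7, 0.1]  # toy "learned" weights (assumed for illustration)
DROP_P = 0.5                     # dropout probability, kept active at test time

def forward_with_dropout(x):
    """One stochastic forward pass: each weight is zeroed with probability DROP_P,
    with the usual 1/(1 - DROP_P) rescaling of the surviving weights."""
    return sum(
        w * xi / (1 - DROP_P) if random.random() > DROP_P else 0.0
        for w, xi in zip(WEIGHTS, x)
    )

def mc_dropout_predict(x, num_samples=100):
    """Draw num_samples stochastic passes; the sample mean is the prediction,
    the sample standard deviation acts as an uncertainty estimate."""
    samples = [forward_with_dropout(x) for _ in range(num_samples)]
    return statistics.mean(samples), statistics.stdev(samples)

mean, std = mc_dropout_predict([1.0, 2.0, 3.0, 4.0], num_samples=200)
print(f"predictive mean = {mean:.3f}, uncertainty (std) = {std:.3f}")
```

Deep Ensembles replace the dropout sampling with independently trained networks, but the aggregation step (mean plus spread over samples) is analogous; in a detector, the sampled detections must additionally be merged across passes, which is what the paper's merging strategies address.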
Cite
Text
Miller et al. "Benchmarking Sampling-Based Probabilistic Object Detectors." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.
Markdown
[Miller et al. "Benchmarking Sampling-Based Probabilistic Object Detectors." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.](https://mlanthology.org/cvprw/2019/miller2019cvprw-benchmarking/)
BibTeX
@inproceedings{miller2019cvprw-benchmarking,
title = {{Benchmarking Sampling-Based Probabilistic Object Detectors}},
author = {Miller, Dimity and Sünderhauf, Niko and Zhang, Haoyang and Hall, David and Dayoub, Feras},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2019},
pages = {42--45},
url = {https://mlanthology.org/cvprw/2019/miller2019cvprw-benchmarking/}
}