Fishyscapes: A Benchmark for Safe Semantic Segmentation in Autonomous Driving

Abstract

Deep learning has enabled impressive progress in the accuracy of semantic segmentation. Yet, the ability to estimate uncertainty and detect anomalies is key for safety-critical applications like autonomous driving. Existing uncertainty estimates have mostly been evaluated on simple tasks, and it is unclear whether these methods generalize to more complex scenarios. We present Fishyscapes, the first public benchmark for uncertainty estimation in the real-world task of semantic segmentation for urban driving. It evaluates pixel-wise uncertainty estimates towards the detection of anomalous objects in front of the vehicle. We adapt state-of-the-art methods to recent semantic segmentation models and compare approaches based on softmax confidence, Bayesian learning, and embedding density. Our results show that anomaly detection is far from solved even for ordinary situations, while our benchmark allows measuring advancements beyond the state-of-the-art.
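The abstract mentions softmax confidence as one of the compared families of pixel-wise anomaly scores. The snippet below is a minimal, hedged sketch of that idea (not the authors' code or the benchmark's evaluation pipeline): it turns per-pixel class logits into an anomaly map via one minus the maximum softmax probability, with array shapes chosen purely for illustration.

```python
# Minimal sketch of per-pixel anomaly scoring from softmax confidence.
# Assumes `logits` is an (H, W, C) array of class scores from a semantic
# segmentation network; shapes and values below are illustrative only.
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def anomaly_score_from_softmax(logits):
    """Return an (H, W) map where higher values mark likely anomalous pixels.

    Score = 1 - max softmax probability: pixels for which the classifier is
    not confident about any known class receive a high anomaly score.
    """
    probs = softmax(logits, axis=-1)
    return 1.0 - probs.max(axis=-1)

# Toy usage with random logits standing in for real network output
# (19 classes, as in Cityscapes-style segmentation).
logits = np.random.randn(256, 512, 19)
scores = anomaly_score_from_softmax(logits)
print(scores.shape, float(scores.min()), float(scores.max()))
```

In the benchmark setting, such a per-pixel score map would then be thresholded or evaluated with ranking metrics against ground-truth anomaly masks; the other families mentioned in the abstract (Bayesian learning, embedding density) produce analogous per-pixel scores by different means.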

Cite

Text

Blum et al. "Fishyscapes: A Benchmark for Safe Semantic Segmentation in Autonomous Driving." IEEE/CVF International Conference on Computer Vision Workshops, 2019. doi:10.1109/ICCVW.2019.00294

Markdown

[Blum et al. "Fishyscapes: A Benchmark for Safe Semantic Segmentation in Autonomous Driving." IEEE/CVF International Conference on Computer Vision Workshops, 2019.](https://mlanthology.org/iccvw/2019/blum2019iccvw-fishyscapes/) doi:10.1109/ICCVW.2019.00294

BibTeX

@inproceedings{blum2019iccvw-fishyscapes,
  title     = {{Fishyscapes: A Benchmark for Safe Semantic Segmentation in Autonomous Driving}},
  author    = {Blum, Hermann and Sarlin, Paul-Edouard and Nieto, Juan I. and Siegwart, Roland and Cadena, Cesar},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2019},
  pages     = {2403--2412},
  doi       = {10.1109/ICCVW.2019.00294},
  url       = {https://mlanthology.org/iccvw/2019/blum2019iccvw-fishyscapes/}
}