Uncertainty as a Criterion for SOTIF Evaluation of Deep Learning Models in Autonomous Driving Systems

Abstract

Ensuring the safety of deep learning models in autonomous driving systems is crucial. In compliance with the automotive safety standard ISO 21448, we propose uncertainty as a new complementary evaluation criterion for ensuring the safety of the intended functionality (SOTIF) of deep learning-based systems. To evaluate and improve the trajectory prediction function of autonomous driving systems, we use epistemic uncertainty as the criterion, quantified with a single-forward-pass model to respect constraints on computational resources and response time. Experimental results on data collected from the CARLA simulator demonstrate that the uncertainty criterion can detect functional insufficiencies in unknown, potentially hazardous driving scenarios and ultimately trigger additional learning.
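
The paper quantifies epistemic uncertainty in a single forward pass; the sketch below illustrates one way such an estimate can be produced, using an evidential-regression-style trajectory head. This is an illustrative assumption, not the authors' implementation: the class name `EvidentialTrajectoryHead`, the feature dimensions, and the flagging threshold are all hypothetical.

```python
# Minimal sketch (not the authors' implementation): single-forward-pass
# epistemic uncertainty for a trajectory-prediction head, using an
# evidential-regression-style output (Normal-Inverse-Gamma parameters).
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvidentialTrajectoryHead(nn.Module):
    """Predicts future (x, y) waypoints plus evidential parameters."""

    def __init__(self, feat_dim: int, horizon: int):
        super().__init__()
        self.horizon = horizon
        # 4 evidential parameters (gamma, nu, alpha, beta) per coordinate.
        self.fc = nn.Linear(feat_dim, horizon * 2 * 4)

    def forward(self, feats: torch.Tensor):
        out = self.fc(feats).view(-1, self.horizon, 2, 4)
        gamma = out[..., 0]                       # predicted waypoint mean
        nu = F.softplus(out[..., 1]) + 1e-6       # virtual observation count
        alpha = F.softplus(out[..., 2]) + 1.0 + 1e-6
        beta = F.softplus(out[..., 3]) + 1e-6
        # Epistemic (model) uncertainty of the NIG posterior.
        epistemic_var = beta / (nu * (alpha - 1.0))
        return gamma, epistemic_var


# Usage: flag potentially unknown/hazardous scenarios by thresholding the
# per-sample mean epistemic uncertainty from one forward pass.
head = EvidentialTrajectoryHead(feat_dim=128, horizon=12)
feats = torch.randn(8, 128)                       # scene features from a backbone
traj, epistemic = head(feats)
score = epistemic.mean(dim=(1, 2))                # one uncertainty score per sample
needs_review = score > 0.5                        # illustrative SOTIF threshold
```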

Cite

Text

Suk and Kim. "Uncertainty as a Criterion for SOTIF Evaluation of Deep Learning Models in Autonomous Driving Systems." NeurIPS 2024 Workshops: BDU, 2024.

Markdown

[Suk and Kim. "Uncertainty as a Criterion for SOTIF Evaluation of Deep Learning Models in Autonomous Driving Systems." NeurIPS 2024 Workshops: BDU, 2024.](https://mlanthology.org/neuripsw/2024/suk2024neuripsw-uncertainty/)

BibTeX

@inproceedings{suk2024neuripsw-uncertainty,
  title     = {{Uncertainty as a Criterion for SOTIF Evaluation of Deep Learning Models in Autonomous Driving Systems}},
  author    = {Suk, Ho and Kim, Shiho},
  booktitle = {NeurIPS 2024 Workshops: BDU},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/suk2024neuripsw-uncertainty/}
}