Segment Every Out-of-Distribution Object

Abstract

Semantic segmentation models, while effective for in-distribution categories, face challenges in real-world deployment due to encountering out-of-distribution (OoD) objects. Detecting these OoD objects is crucial for safety-critical applications. Existing methods rely on anomaly scores, but choosing a suitable threshold for generating masks presents difficulties and can lead to fragmentation and inaccuracy. This paper introduces a method to convert anomaly Score To segmentation Mask, called S2M, a simple and effective framework for OoD detection in semantic segmentation. Unlike methods that assign anomaly scores to pixels, S2M directly segments the entire OoD object. By transforming anomaly scores into prompts for a promptable segmentation model, S2M eliminates the need for threshold selection. Extensive experiments demonstrate that S2M outperforms the state of the art by approximately 20% in IoU and 40% in mean F1 score, on average, across various benchmarks, including the Fishyscapes, Segment-Me-If-You-Can, and RoadAnomaly datasets.
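The core idea of the abstract (turning a per-pixel anomaly score map into prompts for a promptable segmenter rather than thresholding it into a mask directly) can be illustrated with a minimal sketch. Everything here is an assumption for illustration only: the quantile cutoff, the connected-component grouping, and the function name `scores_to_box_prompts` are hypothetical stand-ins for the learned prompt-generation step in S2M, not the paper's actual implementation.

```python
import numpy as np
from collections import deque

def scores_to_box_prompts(score_map, quantile=0.8):
    """Illustrative sketch: group high-anomaly pixels into connected
    components and emit one bounding-box prompt per component, e.g. to
    feed a promptable segmentation model such as SAM.
    The quantile rule is a hypothetical placeholder, not S2M's method."""
    thresh = np.quantile(score_map, quantile)
    binary = score_map > thresh
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    boxes = []
    for y in range(h):
        for x in range(w):
            if binary[y, x] and not seen[y, x]:
                # BFS over 4-connected neighbors to collect one component
                queue = deque([(y, x)])
                seen[y, x] = True
                ys, xs = [], []
                while queue:
                    cy, cx = queue.popleft()
                    ys.append(cy)
                    xs.append(cx)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                # box prompt in (x_min, y_min, x_max, y_max) form
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes

# Toy score map with one high-score blob
score = np.zeros((8, 8))
score[2:5, 3:6] = 1.0
print(scores_to_box_prompts(score))  # → [(3, 2, 5, 4)]
```

Each returned box would then be passed as a prompt to the segmenter, which produces the full OoD object mask; this is how prompt-based segmentation sidesteps picking a per-pixel mask threshold.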

Cite

Text

Zhao et al. "Segment Every Out-of-Distribution Object." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.00375

Markdown

[Zhao et al. "Segment Every Out-of-Distribution Object." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/zhao2024cvpr-segment/) doi:10.1109/CVPR52733.2024.00375

BibTeX

@inproceedings{zhao2024cvpr-segment,
  title     = {{Segment Every Out-of-Distribution Object}},
  author    = {Zhao, Wenjie and Li, Jia and Dong, Xin and Xiang, Yu and Guo, Yunhui},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {3910--3920},
  doi       = {10.1109/CVPR52733.2024.00375},
  url       = {https://mlanthology.org/cvpr/2024/zhao2024cvpr-segment/}
}