ODSmoothGrad: Generating Saliency Maps for Object Detectors

Abstract

Techniques for generating saliency maps continue to be used for explainability of deep learning models, with efforts primarily applied to the image classification task. Such techniques, however, can also be applied to object detectors, not only to the classification scores but also to the bounding box parameters, which are regressed values whose contributing pixels can likewise be identified. In this paper, we present ODSmoothGrad, a tool for generating saliency maps for the classification and the bounding box parameters in object detectors. Given the noisiness of saliency maps, we also apply the SmoothGrad algorithm [12] to visually enhance the pixels of interest. We demonstrate these capabilities on one-stage and two-stage object detectors, with comparisons using classifier-based techniques.
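The core SmoothGrad idea referenced in the abstract is to average the input gradient of a scalar score over several noise-perturbed copies of the input. A minimal NumPy sketch is below; the `score_grad` callable and the toy quadratic score are illustrative stand-ins (for a detector, the score would be a class logit or one bounding-box coordinate of a chosen detection, and the gradient would come from backpropagation):

```python
import numpy as np

def smoothgrad(score_grad, x, n_samples=25, noise_frac=0.15, seed=0):
    """Average the gradient of a scalar score over n_samples noisy
    copies of the input x, as in SmoothGrad (Smilkov et al.).

    score_grad: callable returning d(score)/d(input) for a given input.
    noise_frac: Gaussian noise std as a fraction of the input's value range.
    """
    rng = np.random.default_rng(seed)
    sigma = noise_frac * (x.max() - x.min())
    grads = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        grads += score_grad(noisy)
    return grads / n_samples

# Toy example: score(x) = sum(x**2), so the exact gradient is 2*x.
x = np.array([[1.0, -2.0], [0.5, 3.0]])
saliency = smoothgrad(lambda z: 2.0 * z, x)
```

Because the toy gradient is linear, the smoothed map converges to `2*x` as the sample count grows; with a real detector the averaging suppresses the high-frequency noise typical of raw gradient saliency maps.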

Cite

Text

Gwon and Howell. "ODSmoothGrad: Generating Saliency Maps for Object Detectors." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023. doi:10.1109/CVPRW59228.2023.00376

Markdown

[Gwon and Howell. "ODSmoothGrad: Generating Saliency Maps for Object Detectors." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2023.](https://mlanthology.org/cvprw/2023/gwon2023cvprw-odsmoothgrad/) doi:10.1109/CVPRW59228.2023.00376

BibTeX

@inproceedings{gwon2023cvprw-odsmoothgrad,
  title     = {{ODSmoothGrad: Generating Saliency Maps for Object Detectors}},
  author    = {Gwon, Chul and Howell, Steven C.},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2023},
  pages     = {3686--3690},
  doi       = {10.1109/CVPRW59228.2023.00376},
  url       = {https://mlanthology.org/cvprw/2023/gwon2023cvprw-odsmoothgrad/}
}