Interpreting Undesirable Pixels for Image Classification on Black-Box Models
Abstract
In an effort to interpret black-box models, research on explanation methods has progressed in recent years. Most studies have tried to identify the input pixels that are crucial to a classifier's prediction. While this approach is meaningful for analyzing the characteristics of black-box models, it is also important to investigate the pixels that interfere with the prediction. To tackle this issue, in this paper, we propose an explanation method that visualizes the regions that are undesirable for classifying an image as a target class. Specifically, we divide the concept of undesirable regions into two terms: (1) factors for the target class, which hinder black-box models from identifying the intrinsic characteristics of that class, and (2) factors for non-target classes, i.e., regions that drive an image to be classified as other classes. We visualize such undesirable regions on heatmaps to qualitatively validate the proposed method. Furthermore, we present an evaluation metric that provides quantitative results on ImageNet.
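The core intuition can be illustrated with a simple occlusion-style probe: a region is "undesirable" for the target class if masking it *raises* the target-class score. The sketch below is a minimal illustration of that idea only, not the paper's actual method; `predict_fn` is a hypothetical black-box that maps an HxWxC image to a class-probability vector.

```python
import numpy as np

def undesirable_heatmap(image, predict_fn, target_class, patch=8, fill=0.0):
    """Illustrative occlusion probe (not the paper's exact method):
    mark a patch as undesirable for `target_class` when masking it
    increases the target-class probability returned by `predict_fn`."""
    h, w = image.shape[:2]
    base = predict_fn(image)[target_class]
    heat = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            masked = image.copy()
            masked[y:y + patch, x:x + patch] = fill
            delta = predict_fn(masked)[target_class] - base
            # Positive delta: removing this region helped the target class,
            # so the region was hindering the prediction.
            heat[y:y + patch, x:x + patch] = max(delta, 0.0)
    return heat
```

The resulting map can be overlaid on the input image as a heatmap; regions with large values are those whose removal most improves the target-class score.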
Cite
Text
Kang et al. "Interpreting Undesirable Pixels for Image Classification on Black-Box Models." IEEE/CVF International Conference on Computer Vision Workshops, 2019. doi:10.1109/ICCVW.2019.00523
Markdown
[Kang et al. "Interpreting Undesirable Pixels for Image Classification on Black-Box Models." IEEE/CVF International Conference on Computer Vision Workshops, 2019.](https://mlanthology.org/iccvw/2019/kang2019iccvw-interpreting/) doi:10.1109/ICCVW.2019.00523
BibTeX
@inproceedings{kang2019iccvw-interpreting,
title = {{Interpreting Undesirable Pixels for Image Classification on Black-Box Models}},
author = {Kang, Sin-Han and Jung, Honggyu and Lee, Seong-Whan},
booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
year = {2019},
pages = {4250-4254},
doi = {10.1109/ICCVW.2019.00523},
url = {https://mlanthology.org/iccvw/2019/kang2019iccvw-interpreting/}
}