Explaining Failure: Investigation of Surprise and Expectation in CNNs
Abstract
As Convolutional Neural Networks (CNNs) have expanded into everyday use, more rigorous methods of explaining their inner workings are required. Current popular techniques, such as saliency maps, show how a network interprets an input image at a simple level by scoring pixels according to their importance. In this paper, we introduce the concepts of surprise and expectation as a means of exploring and visualising how a network learns to model the training data through the understanding of filter activations. We show that this is a powerful technique for understanding how the network reacts to an unseen image compared to the training data. We also show that the insights provided by our technique allow us to "fix" misclassifications. Our technique can be used with nearly all types of CNN. We evaluate our method both qualitatively and quantitatively using ImageNet.
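To make the idea concrete, here is a minimal sketch of one way "surprise" could be quantified from filter activations: record per-filter activation statistics over the training data, then score an unseen image by how far its activations deviate from those statistics. This is an illustrative assumption, not the paper's exact method; the choice of model (VGG16), layer index, and z-score formulation are all hypothetical.

```python
# Hypothetical sketch: "surprise" as the deviation of an image's filter
# activations from training-set statistics. The model, layer, and scoring
# rule are illustrative assumptions, not the authors' implementation.
import torch
import torchvision.models as models

model = models.vgg16(pretrained=True).eval()
layer = model.features[28]  # an arbitrary late conv layer of VGG16

acts = []

def hook(module, inputs, output):
    # Mean activation per filter over the spatial dimensions -> (N, C)
    acts.append(output.mean(dim=(2, 3)).detach())

handle = layer.register_forward_hook(hook)

@torch.no_grad()
def filter_stats(loader):
    """Per-filter mean/std of activations over a (training) data loader."""
    acts.clear()
    for images, _ in loader:
        model(images)
    all_acts = torch.cat(acts)  # (num_images, num_filters)
    return all_acts.mean(dim=0), all_acts.std(dim=0)

@torch.no_grad()
def surprise(image, mean, std, eps=1e-8):
    """Per-filter |z-score| of one image's activations vs. training stats.

    High values flag filters that respond very differently from what the
    training data led the network to "expect"."""
    acts.clear()
    model(image.unsqueeze(0))
    z = (acts[0][0] - mean) / (std + eps)
    return z.abs()
```

Under this reading, sorting filters by their surprise score for a misclassified image would indicate which learned features the image violates relative to the training data, which is the kind of insight the abstract suggests can be used to diagnose and "fix" failures.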
Cite

Text
Hartley et al. "Explaining Failure: Investigation of Surprise and Expectation in CNNs." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020. doi:10.1109/CVPRW50498.2020.00014

Markdown
[Hartley et al. "Explaining Failure: Investigation of Surprise and Expectation in CNNs." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020.](https://mlanthology.org/cvprw/2020/hartley2020cvprw-explaining/) doi:10.1109/CVPRW50498.2020.00014

BibTeX
@inproceedings{hartley2020cvprw-explaining,
title = {{Explaining Failure: Investigation of Surprise and Expectation in CNNs}},
author = {Hartley, Thomas and Sidorov, Kirill A. and Willis, Christopher and Marshall, A. David},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2020},
  pages = {56--65},
doi = {10.1109/CVPRW50498.2020.00014},
url = {https://mlanthology.org/cvprw/2020/hartley2020cvprw-explaining/}
}