Inference Graphs for CNN Interpretation
Abstract
Convolutional neural networks (CNNs) have achieved superior accuracy in many vision-related tasks. However, the inference process through intermediate layers is opaque, making it difficult to interpret such networks or develop trust in their operation. We propose to model the activity of the network's hidden layers using probabilistic models. The activity patterns in layers of interest are modeled as Gaussian mixture models, and transition probabilities between clusters in consecutive modeled layers are estimated. Based on maximum-likelihood considerations, a subset of the nodes and paths relevant to the network's prediction is chosen, connected, and visualized as an inference graph. We show that such graphs are useful for understanding the general inference process of a class, as well as for explaining decisions the network makes regarding specific images. In addition, the models yield an interesting observation regarding the highly local nature of column activities in the top CNN layers.
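A minimal sketch of the modeling step described above, assuming layer activations have already been extracted: fit a Gaussian mixture model to each modeled layer and estimate transition probabilities between cluster assignments in consecutive layers. The function names, cluster count, and use of scikit-learn's GaussianMixture here are illustrative assumptions, not the authors' implementation.

```python
# Sketch: model per-layer activity with GMMs and estimate cluster-to-cluster
# transition probabilities between two consecutive modeled layers.
# All names and parameters below are illustrative stand-ins.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_layer_model(acts, n_clusters=10, seed=0):
    """Fit a GMM to a layer's activation vectors (one row per sample)."""
    gmm = GaussianMixture(n_components=n_clusters,
                          covariance_type="diag", random_state=seed)
    gmm.fit(acts)
    return gmm

def transition_probabilities(gmm_a, gmm_b, acts_a, acts_b):
    """Estimate P(cluster in upper layer | cluster in lower layer)
    from co-occurring hard cluster assignments."""
    za = gmm_a.predict(acts_a)        # assignments in the lower layer
    zb = gmm_b.predict(acts_b)        # assignments in the consecutive upper layer
    counts = np.zeros((gmm_a.n_components, gmm_b.n_components))
    np.add.at(counts, (za, zb), 1.0)  # accumulate joint co-occurrence counts
    counts += 1e-9                    # guard against empty clusters
    return counts / counts.sum(axis=1, keepdims=True)

# Usage with random stand-ins for extracted activations:
rng = np.random.default_rng(0)
acts_l1 = rng.normal(size=(1000, 64))   # e.g., 1000 samples from a 64-channel layer
acts_l2 = rng.normal(size=(1000, 128))
gmm1, gmm2 = fit_layer_model(acts_l1), fit_layer_model(acts_l2)
T = transition_probabilities(gmm1, gmm2, acts_l1, acts_l2)
print(T.shape)  # (10, 10); each row sums to 1
```

Given such transition matrices for all pairs of consecutive modeled layers, a maximum-likelihood selection of high-probability clusters and transitions would then yield the nodes and edges of the inference graph.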
Cite
Text
Konforti et al. "Inference Graphs for CNN Interpretation." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58595-2_5

Markdown
[Konforti et al. "Inference Graphs for CNN Interpretation." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/konforti2020eccv-inference/) doi:10.1007/978-3-030-58595-2_5

BibTeX
@inproceedings{konforti2020eccv-inference,
title = {{Inference Graphs for CNN Interpretation}},
author = {Konforti, Yael and Shpigler, Alon and Lerner, Boaz and Bar-Hillel, Aharon},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58595-2_5},
url = {https://mlanthology.org/eccv/2020/konforti2020eccv-inference/}
}