Interpreting CNN Knowledge via an Explanatory Graph
Abstract
This paper learns a graphical model, namely an explanatory graph, which reveals the knowledge hierarchy hidden inside a pre-trained CNN. Considering that each filter in a conv-layer of a pre-trained CNN usually represents a mixture of object parts, we propose a simple yet efficient method to automatically disentangle different part patterns from each filter and construct an explanatory graph. In the explanatory graph, each node represents a part pattern, and each edge encodes co-activation relationships and spatial relationships between patterns. More importantly, we learn the explanatory graph for a pre-trained CNN in an unsupervised manner, i.e., without the need to annotate object parts. Experiments show that each graph node consistently represents the same object part across different images. We transfer part patterns in the explanatory graph to the task of part localization, and our method significantly outperforms other approaches.
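To make the structure described above concrete, the following is a minimal sketch of an explanatory-graph data structure: nodes identify a part pattern disentangled from a particular conv-layer filter, and edges carry a co-activation strength together with an average spatial displacement between two patterns. All class and field names (`PartPattern`, `ExplanatoryGraph`, `co_activation`, `displacement`) are hypothetical illustrations, not the paper's actual implementation, and the learning procedure itself is not reproduced here.

```python
# Illustrative sketch only; names and fields are assumptions, not the
# paper's implementation.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PartPattern:
    """A graph node: one part pattern disentangled from a conv filter."""
    layer: int       # conv-layer of the source filter
    filter_id: int   # index of the filter within that layer
    pattern_id: int  # index among the patterns disentangled from the filter

@dataclass
class ExplanatoryGraph:
    """Nodes are part patterns; each edge stores a co-activation
    strength and an average spatial displacement between patterns."""
    nodes: list = field(default_factory=list)
    edges: dict = field(default_factory=dict)  # (a, b) -> (weight, (dx, dy))

    def add_node(self, node: PartPattern) -> PartPattern:
        self.nodes.append(node)
        return node

    def connect(self, a, b, co_activation, displacement):
        self.edges[(a, b)] = (co_activation, displacement)

# Two patterns disentangled from the same filter, linked by an edge.
g = ExplanatoryGraph()
head = g.add_node(PartPattern(layer=4, filter_id=12, pattern_id=0))
torso = g.add_node(PartPattern(layer=4, filter_id=12, pattern_id=1))
g.connect(head, torso, co_activation=0.8, displacement=(0, 15))
print(len(g.nodes), len(g.edges))  # → 2 1
```

Keeping nodes immutable (`frozen=True`) lets them serve as dictionary keys for the edge map, which mirrors how a pattern's identity (layer, filter, pattern index) stays fixed once disentangled.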
Cite
Text
Zhang et al. "Interpreting CNN Knowledge via an Explanatory Graph." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.11819
Markdown
[Zhang et al. "Interpreting CNN Knowledge via an Explanatory Graph." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/zhang2018aaai-interpreting/) doi:10.1609/AAAI.V32I1.11819
BibTeX
@inproceedings{zhang2018aaai-interpreting,
title = {{Interpreting CNN Knowledge via an Explanatory Graph}},
author = {Zhang, Quanshi and Cao, Ruiming and Shi, Feng and Wu, Ying Nian and Zhu, Song-Chun},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2018},
  pages = {4454--4463},
doi = {10.1609/AAAI.V32I1.11819},
url = {https://mlanthology.org/aaai/2018/zhang2018aaai-interpreting/}
}