Knowledge Consistency Between Neural Networks and Beyond
Abstract
This paper aims to analyze knowledge consistency between pre-trained deep neural networks. We propose a generic definition of knowledge consistency between neural networks at different fuzziness levels. A task-agnostic method is designed to disentangle feature components that represent the consistent knowledge from the raw intermediate-layer features of each neural network. As a generic tool, our method can be broadly used for different applications. In preliminary experiments, we have used knowledge consistency as a tool to diagnose representations of neural networks. Knowledge consistency provides new insights into the success of existing deep-learning techniques, such as knowledge distillation and network compression. More crucially, knowledge consistency can also be used to refine pre-trained networks and boost performance.
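To make the core idea concrete, here is a minimal sketch of disentangling a "consistent" feature component. This is an illustrative assumption, not the authors' actual method: the paper learns a disentangler over intermediate-layer features at multiple fuzziness levels, whereas the toy code below approximates the idea with a single linear least-squares map from one network's features to the other's, treating the predictable part as consistent knowledge and the residual as inconsistent.

```python
# Illustrative sketch only (assumption: a plain linear map stands in for the
# paper's learned disentangler; all names here are hypothetical).
import numpy as np

def disentangle_consistent(feat_a, feat_b):
    """Split feat_a into a component predictable from feat_b ("consistent"
    knowledge) and an unpredictable residual ("inconsistent" component).

    feat_a, feat_b: (n_samples, dim) intermediate-layer feature matrices
    from two pre-trained networks, evaluated on the same inputs.
    Returns (consistent, inconsistent) with consistent + inconsistent == feat_a.
    """
    # Least-squares map W such that feat_b @ W approximates feat_a.
    W, *_ = np.linalg.lstsq(feat_b, feat_a, rcond=None)
    consistent = feat_b @ W
    return consistent, feat_a - consistent

# Synthetic demo: both networks encode the same latent "knowledge" plus noise.
rng = np.random.default_rng(0)
shared = rng.normal(size=(256, 8))                 # knowledge shared by both nets
feat_b = shared @ rng.normal(size=(8, 16))
feat_a = shared @ rng.normal(size=(8, 16)) + 0.1 * rng.normal(size=(256, 16))

cons, incons = disentangle_consistent(feat_a, feat_b)
```

In this synthetic setup the consistent component should account for most of `feat_a`'s energy, since only the small noise term is unpredictable from `feat_b`.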
Cite
Text
Liang et al. "Knowledge Consistency Between Neural Networks and Beyond." International Conference on Learning Representations, 2020.

Markdown

[Liang et al. "Knowledge Consistency Between Neural Networks and Beyond." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/liang2020iclr-knowledge/)

BibTeX
@inproceedings{liang2020iclr-knowledge,
title = {{Knowledge Consistency Between Neural Networks and Beyond}},
author = {Liang, Ruofan and Li, Tianlin and Li, Longfei and Wang, Jing and Zhang, Quanshi},
booktitle = {International Conference on Learning Representations},
year = {2020},
url = {https://mlanthology.org/iclr/2020/liang2020iclr-knowledge/}
}