Cross-Layer Distillation with Semantic Calibration
Abstract
Recently proposed knowledge distillation approaches based on feature-map transfer show that intermediate layers of a teacher model can serve as effective targets for training a student model to achieve better generalization. Existing studies mainly focus on particular representation forms for knowledge transfer between manually specified pairs of teacher-student intermediate layers. However, the semantics of intermediate layers may vary across networks, and manual layer association can lead to negative regularization caused by semantic mismatch between certain teacher-student layer pairs. To address this problem, we propose Semantic Calibration for Cross-layer Knowledge Distillation (SemCKD), which automatically assigns proper target layers of the teacher model to each student layer with an attention mechanism. With the learned attention distribution, each student layer distills knowledge from multiple teacher layers rather than from a single fixed intermediate layer, providing appropriate cross-layer supervision during training. Consistent improvements over state-of-the-art approaches are observed in extensive experiments with various teacher and student network architectures, demonstrating the effectiveness and flexibility of the proposed attention-based soft layer association mechanism for cross-layer distillation.
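The PyTorch sketch below illustrates the core idea of attention-based soft layer association described in the abstract. It is a minimal, hypothetical reconstruction rather than the authors' released implementation: the class name `CrossLayerAttention`, the use of globally pooled features as queries and keys, and the 1x1-convolution channel alignment are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossLayerAttention(nn.Module):
    """Sketch of SemCKD-style soft layer association (illustrative,
    not the authors' exact architecture). Each student layer attends
    over all candidate teacher layers; the learned weights mix the
    per-pair feature-matching losses."""

    def __init__(self, student_channels, teacher_channels, dim=128):
        super().__init__()
        # Per-layer projections of globally pooled features into a
        # shared space where query-key similarity is computed.
        self.query_proj = nn.ModuleList(nn.Linear(c, dim) for c in student_channels)
        self.key_proj = nn.ModuleList(nn.Linear(c, dim) for c in teacher_channels)
        # Project each teacher map to each student layer's channel
        # count so an MSE feature-matching loss is well defined.
        self.align = nn.ModuleList(
            nn.ModuleList(nn.Conv2d(ct, cs, kernel_size=1) for ct in teacher_channels)
            for cs in student_channels
        )

    def forward(self, student_feats, teacher_feats):
        # Queries (B, S, dim) from student layers, keys (B, T, dim)
        # from teacher layers, via global average pooling + projection.
        q = torch.stack([p(f.mean(dim=(2, 3)))
                         for p, f in zip(self.query_proj, student_feats)], dim=1)
        k = torch.stack([p(f.mean(dim=(2, 3)))
                         for p, f in zip(self.key_proj, teacher_feats)], dim=1)
        # Attention over teacher layers for each student layer: (B, S, T).
        attn = F.softmax(q @ k.transpose(1, 2) / q.size(-1) ** 0.5, dim=-1)

        loss = 0.0
        for s, fs in enumerate(student_feats):
            for t, ft in enumerate(teacher_feats):
                # Match spatial and channel sizes, then weight the
                # per-sample MSE by the learned attention for pair (s, t).
                ft_aligned = self.align[s][t](
                    F.adaptive_avg_pool2d(ft, fs.shape[2:]))
                pair_mse = F.mse_loss(fs, ft_aligned,
                                      reduction="none").mean(dim=(1, 2, 3))
                loss = loss + (attn[:, s, t] * pair_mse).mean()
        return loss
```

In training, such a loss term would typically be added to the usual cross-entropy and soft-label distillation objectives with a tunable weighting coefficient.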
Cite
Text
Chen et al. "Cross-Layer Distillation with Semantic Calibration." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I8.16865
Markdown
[Chen et al. "Cross-Layer Distillation with Semantic Calibration." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/chen2021aaai-cross/) doi:10.1609/AAAI.V35I8.16865
BibTeX
@inproceedings{chen2021aaai-cross,
title = {{Cross-Layer Distillation with Semantic Calibration}},
author = {Chen, Defang and Mei, Jian-Ping and Zhang, Yuan and Wang, Can and Wang, Zhe and Feng, Yan and Chen, Chun},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2021},
pages = {7028--7036},
doi = {10.1609/AAAI.V35I8.16865},
url = {https://mlanthology.org/aaai/2021/chen2021aaai-cross/}
}