Knowledge Distillation via the Target-Aware Transformer

Abstract

Knowledge distillation has become a de facto standard for improving the performance of small neural networks. Most previous works propose to regress the representational features from the teacher to the student in a one-to-one spatial matching fashion. However, this tends to overlook the fact that, due to architectural differences, the semantic information at the same spatial location usually varies. This greatly undermines the underlying assumption of the one-to-one distillation approach. To this end, we propose a novel one-to-all spatial matching knowledge distillation approach. Specifically, we allow each pixel of the teacher feature to be distilled to all spatial locations of the student features, weighted by a similarity that is generated from a target-aware transformer. Our approach surpasses the state-of-the-art methods by a significant margin on various computer vision benchmarks, such as ImageNet, Pascal VOC and COCOStuff10k. Code is available at https://github.com/sihaoevery/TaT.
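For intuition, below is a minimal PyTorch sketch of the one-to-all matching idea described in the abstract: each teacher position attends to all student positions, and the similarity-weighted aggregation of student features is regressed to the teacher feature. This is an illustrative sketch, not the authors' exact formulation; the function name, the plain dot-product similarity, and the assumption that student channels are already projected to the teacher's dimension are all assumptions (see the official repository above for the real implementation).

import torch
import torch.nn.functional as F

def one_to_all_distill_loss(f_s, f_t):
    # f_s: student feature map, shape (B, C, H, W)
    # f_t: teacher feature map, shape (B, C, H, W)
    # Assumes the student has already been projected to the teacher's
    # channel dimension (e.g., by a 1x1 conv trained alongside).
    B, C, H, W = f_t.shape
    s = f_s.flatten(2).transpose(1, 2)   # (B, N, C), N = H * W
    t = f_t.flatten(2).transpose(1, 2)   # (B, N, C)
    # Similarity of each teacher position to every student position
    # (scaled dot product, as in standard attention).
    attn = torch.softmax(t @ s.transpose(1, 2) / C ** 0.5, dim=-1)  # (B, N, N)
    # Reconfigure the student: each teacher position aggregates all
    # student positions, weighted by similarity (one-to-all matching).
    s_hat = attn @ s                     # (B, N, C)
    # Regress the reconfigured student features to the teacher's.
    return F.mse_loss(s_hat, t)

# Usage with dummy feature maps:
f_s = torch.randn(2, 64, 8, 8)  # student features
f_t = torch.randn(2, 64, 8, 8)  # teacher features
loss = one_to_all_distill_loss(f_s, f_t)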

Cite

Text

Lin et al. "Knowledge Distillation via the Target-Aware Transformer." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.01064

Markdown

[Lin et al. "Knowledge Distillation via the Target-Aware Transformer." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/lin2022cvpr-knowledge/) doi:10.1109/CVPR52688.2022.01064

BibTeX

@inproceedings{lin2022cvpr-knowledge,
  title     = {{Knowledge Distillation via the Target-Aware Transformer}},
  author    = {Lin, Sihao and Xie, Hongwei and Wang, Bing and Yu, Kaicheng and Chang, Xiaojun and Liang, Xiaodan and Wang, Gang},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {10915--10924},
  doi       = {10.1109/CVPR52688.2022.01064},
  url       = {https://mlanthology.org/cvpr/2022/lin2022cvpr-knowledge/}
}