A Simple and Generic Framework for Feature Distillation via Channel-Wise Transformation

Abstract

Knowledge distillation is a popular technique for transferring knowledge from a large teacher model to a smaller student model by having the student mimic the teacher. However, distillation that directly aligns the feature maps of teacher and student may impose overly strict constraints on the student, degrading its performance. To alleviate this feature misalignment issue, existing works mainly focus on spatially aligning the feature maps of the teacher and the student via pixel-wise transformations. In this paper, we find that aligning the feature maps of teacher and student along the channel dimension is also effective for addressing feature misalignment. Specifically, we propose a learnable nonlinear channel-wise transformation that aligns the features of the student model with those of the teacher. Based on this idea, we build a simple and generic framework for feature distillation with only one hyper-parameter, which balances the distillation loss and the task-specific loss. Extensive experimental results show that our method achieves significant performance improvements across computer vision tasks, including image classification (+3.28% top-1 accuracy for MobileNetV1 on ImageNet-1K), object detection (+3.9% bbox mAP for ResNet50-based Faster R-CNN on MS COCO), instance segmentation (+2.8% mask mAP for ResNet50-based Mask R-CNN), and semantic segmentation (+4.66% mIoU for ResNet18-based PSPNet on Cityscapes), demonstrating the effectiveness and versatility of the proposed method.
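
The abstract outlines the core recipe: pass the student's feature map through a learnable nonlinear channel-wise transformation before matching it to the teacher's, and combine the distillation loss with the task loss through a single balancing weight. The following is a minimal PyTorch sketch of that idea only; the two-layer 1x1-convolution transform, the MSE matching loss, and the name `lam` are our assumptions, since the abstract does not specify the exact form of the transformation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelWiseTransform(nn.Module):
    # Hypothetical learnable nonlinear transform acting along the channel
    # dimension: 1x1 convolutions mix channels at each spatial position.
    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        self.transform = nn.Sequential(
            nn.Conv2d(student_channels, teacher_channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(teacher_channels, teacher_channels, kernel_size=1),
        )

    def forward(self, student_feat):
        return self.transform(student_feat)

def total_loss(student_feat, teacher_feat, transform, task_loss, lam=1.0):
    # Align the student's features to the teacher's along channels, then
    # balance feature matching against the task loss with one weight `lam`
    # (the framework's single hyper-parameter).
    aligned = transform(student_feat)
    distill_loss = F.mse_loss(aligned, teacher_feat.detach())
    return task_loss + lam * distill_loss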

Cite

Text

Liu et al. "A Simple and Generic Framework for Feature Distillation via Channel-Wise Transformation." IEEE/CVF International Conference on Computer Vision Workshops, 2023. doi:10.1109/ICCVW60793.2023.00121

Markdown

[Liu et al. "A Simple and Generic Framework for Feature Distillation via Channel-Wise Transformation." IEEE/CVF International Conference on Computer Vision Workshops, 2023.](https://mlanthology.org/iccvw/2023/liu2023iccvw-simple/) doi:10.1109/ICCVW60793.2023.00121

BibTeX

@inproceedings{liu2023iccvw-simple,
  title     = {{A Simple and Generic Framework for Feature Distillation via Channel-Wise Transformation}},
  author    = {Liu, Ziwei and Wang, Yongtao and Chu, Xiaojie and Dong, Nan and Qi, Shengxiang and Ling, Haibin},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2023},
  pages     = {1121--1130},
  doi       = {10.1109/ICCVW60793.2023.00121},
  url       = {https://mlanthology.org/iccvw/2023/liu2023iccvw-simple/}
}