Distilling Knowledge from Heterogeneous Architectures for Semantic Segmentation

Abstract

Current knowledge distillation (KD) methods for semantic segmentation focus on guiding the student to imitate the teacher's knowledge within homogeneous architectures. However, these methods overlook the diverse knowledge contained in architectures with different inductive biases, which is crucial for enabling the student to acquire a more precise and comprehensive understanding of the data during distillation. To this end, we propose for the first time a generic knowledge distillation method for semantic segmentation from a heterogeneous perspective, named HeteroAKD. Due to the substantial disparities between heterogeneous architectures, such as CNN and Transformer, directly transferring cross-architecture knowledge presents significant challenges. To eliminate the influence of architecture-specific information, the intermediate features of both the teacher and student are projected into an aligned logits space. Furthermore, to exploit the diverse knowledge of heterogeneous architectures and deliver the customized knowledge required by the student, a teacher-student knowledge mixing mechanism (KMM) and a teacher-student knowledge evaluation mechanism (KEM) are introduced. Both mechanisms operate by assessing the reliability of the heterogeneous teacher-student knowledge and the discrepancy between them. Extensive experiments conducted on three mainstream benchmarks using various teacher-student pairs demonstrate that our HeteroAKD framework outperforms state-of-the-art KD methods in facilitating distillation between heterogeneous architectures.
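To make the abstract's pipeline concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' code) of the two core ideas: projecting architecture-specific intermediate features into a shared logits space, and mixing teacher/student logits by a per-pixel reliability weight in the spirit of KMM/KEM. All module names, the reliability measure (ground-truth class probability), and the mixing rule are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of heterogeneous feature-to-logits projection and
# reliability-weighted knowledge mixing; names and formulas are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 19  # e.g., Cityscapes

class LogitsProjector(nn.Module):
    """Maps architecture-specific features to class logits via a 1x1 conv,
    removing architecture-specific channel structure before distillation."""
    def __init__(self, in_channels, num_classes=NUM_CLASSES):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, feats):
        return self.proj(feats)  # (B, num_classes, H, W)

def reliability(logits, labels):
    """Per-pixel reliability: softmax probability of the ground-truth class
    (one plausible proxy for how trustworthy each prediction is)."""
    probs = F.softmax(logits, dim=1)
    return probs.gather(1, labels.unsqueeze(1)).squeeze(1)  # (B, H, W)

def mixed_target(t_logits, s_logits, labels):
    """Pixelwise mixture of teacher and student logits, weighted by the
    teacher's relative reliability -- a stand-in for the paper's KMM."""
    r_t = reliability(t_logits, labels)
    r_s = reliability(s_logits, labels)
    w = (r_t / (r_t + r_s + 1e-6)).unsqueeze(1)  # teacher weight in [0, 1]
    return w * t_logits + (1.0 - w) * s_logits

# Toy usage: CNN teacher features (512-ch) vs. Transformer student (256-ch).
t_feats = torch.randn(2, 512, 64, 64)
s_feats = torch.randn(2, 256, 64, 64)
labels  = torch.randint(0, NUM_CLASSES, (2, 64, 64))

t_head, s_head = LogitsProjector(512), LogitsProjector(256)
s_logits = s_head(s_feats)
target = mixed_target(t_head(t_feats), s_logits, labels).detach()
kd_loss = F.kl_div(F.log_softmax(s_logits, dim=1),
                   F.softmax(target, dim=1), reduction="batchmean")
```

In this reading, the 1x1 projection heads give both networks a common, architecture-agnostic logits space, and the reliability weight lets the mixed target lean on whichever model is more trustworthy at each pixel.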

Cite

Text

Huang et al. "Distilling Knowledge from Heterogeneous Architectures for Semantic Segmentation." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I4.32399

Markdown

[Huang et al. "Distilling Knowledge from Heterogeneous Architectures for Semantic Segmentation." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/huang2025aaai-distilling/) doi:10.1609/AAAI.V39I4.32399

BibTeX

@inproceedings{huang2025aaai-distilling,
  title     = {{Distilling Knowledge from Heterogeneous Architectures for Semantic Segmentation}},
  author    = {Huang, Yanglin and Hu, Kai and Zhang, Yuan and Chen, Zhineng and Gao, Xieping},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {3824--3832},
  doi       = {10.1609/AAAI.V39I4.32399},
  url       = {https://mlanthology.org/aaai/2025/huang2025aaai-distilling/}
}