Knowledge Distillation for Learned Image Compression

Abstract

Recently, learned image compression (LIC) models have achieved remarkable rate-distortion (RD) performance, yet their high computational complexity severely limits practical deployment. To overcome this challenge, we propose a novel Stage-wise Modular Distillation framework, SMoDi, which efficiently compresses LIC models while preserving RD performance. This framework treats each stage of the LIC model as an independent sub-task, mirroring the teacher's task decomposition in the student and thereby simplifying knowledge transfer. We identify two crucial factors determining the effectiveness of knowledge distillation: student model construction and loss function design. Specifically, we first propose Teacher-Guided Student Model Construction, a pruning-like method ensuring architectural consistency between teacher and student models. Next, we introduce Implicit End-to-end Supervision, facilitating adaptive energy compaction and bitrate regularization. Based on these insights, we develop KDIC, a lightweight student model derived from the state-of-the-art S2CFormer model. Experimental results demonstrate that KDIC achieves top-tier RD performance with significantly reduced computational complexity. To our knowledge, this work is among the first successful applications of knowledge distillation to learned image compression.
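The stage-wise idea in the abstract (each teacher stage supervising the matching student stage) can be sketched in a few lines. This is a minimal, framework-free illustration, not the paper's implementation; all function names, the per-stage MSE choice, and the stage weights are illustrative assumptions.

```python
# Hypothetical sketch of a stage-wise distillation objective: each student
# stage is supervised by the corresponding teacher stage. Names and the
# per-stage MSE loss are illustrative, not taken from the paper.

def mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    assert len(a) == len(b)
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def stagewise_distillation_loss(teacher_feats, student_feats, weights=None):
    """Weighted sum of per-stage feature-matching losses.

    teacher_feats / student_feats: one feature vector per model stage.
    weights: optional per-stage weights (defaults to uniform).
    """
    if weights is None:
        weights = [1.0] * len(teacher_feats)
    return sum(w * mse(t, s)
               for w, t, s in zip(weights, teacher_feats, student_feats))

# Toy usage: two "stages", each with a 3-dimensional feature vector.
teacher = [[1.0, 2.0, 3.0], [0.5, 0.5, 0.5]]
student = [[1.0, 2.0, 2.0], [0.5, 0.0, 0.5]]
loss = stagewise_distillation_loss(teacher, student)
```

In a real LIC pipeline this term would be combined with the usual rate-distortion objective; the abstract's Implicit End-to-end Supervision suggests the final loss also regularizes bitrate, which this sketch omits.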

Cite

Text

Chen et al. "Knowledge Distillation for Learned Image Compression." International Conference on Computer Vision, 2025.

Markdown

[Chen et al. "Knowledge Distillation for Learned Image Compression." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/chen2025iccv-knowledge/)

BibTeX

@inproceedings{chen2025iccv-knowledge,
  title     = {{Knowledge Distillation for Learned Image Compression}},
  author    = {Chen, Yunuo and Lyu, Zezheng and He, Bing and Cao, Ning and Chen, Gang and Lu, Guo and Zhang, Wenjun},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {4996--5006},
  url       = {https://mlanthology.org/iccv/2025/chen2025iccv-knowledge/}
}