Towards Compact Single Image Super-Resolution via Contrastive Self-Distillation

Abstract

Convolutional neural networks (CNNs) are highly successful for super-resolution (SR) but often require sophisticated architectures with heavy memory cost and computational overhead, which significantly restricts their practical deployment on resource-limited devices. In this paper, we propose a novel contrastive self-distillation (CSD) framework to simultaneously compress and accelerate various off-the-shelf SR models. In particular, a channel-splitting super-resolution network is first constructed from a target teacher network as a compact student network. Then, we propose a novel contrastive loss to improve the quality of SR images and boost PSNR/SSIM via explicit knowledge transfer. Extensive experiments demonstrate that the proposed CSD scheme effectively compresses and accelerates several standard SR models such as EDSR, RCAN, and CARN. Code is available at https://github.com/Booooooooooo/CSD.
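
To make the abstract's two ingredients concrete, the following is a minimal PyTorch sketch of (1) a channel-splitting student that shares weights with the teacher but runs only the first fraction of each layer's channels, and (2) a contrastive loss that pulls the student's SR output toward the teacher's and pushes it away from low-quality negatives. TinySR, width_ratio, and the single-conv embedding are illustrative assumptions; the paper uses real SR backbones (EDSR, RCAN, CARN) and frozen VGG features.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySR(nn.Module):
    """Toy x2 SR network (stand-in for EDSR/RCAN/CARN)."""
    def __init__(self, feats=64):
        super().__init__()
        self.head = nn.Conv2d(3, feats, 3, padding=1)
        self.body = nn.Conv2d(feats, feats, 3, padding=1)
        self.tail = nn.Conv2d(feats, 3 * 4, 3, padding=1)  # x2 via pixel shuffle

    def forward(self, x, width_ratio=1.0):
        # Channel splitting: slice each conv to its first k channels, so the
        # "student" is just a narrow view of the teacher's own weights.
        k = max(1, int(self.body.out_channels * width_ratio))
        f = F.relu(F.conv2d(x, self.head.weight[:k], self.head.bias[:k], padding=1))
        f = F.relu(F.conv2d(f, self.body.weight[:k, :k], self.body.bias[:k], padding=1))
        out = F.conv2d(f, self.tail.weight[:, :k], self.tail.bias, padding=1)
        return F.pixel_shuffle(out, 2)

def contrastive_loss(embed, sr_student, sr_teacher, negatives, eps=1e-7):
    """Ratio of distances in an embedding space: small when the student is
    close to the teacher (positive) and far from the negatives, e.g. a
    bicubic upsampling of the LR input."""
    a = embed(sr_student)
    pos = F.l1_loss(a, embed(sr_teacher))
    neg = sum(F.l1_loss(a, embed(n)) for n in negatives) / len(negatives)
    return pos / (neg + eps)

# Usage: the loss terms for one training step.
teacher = TinySR()
lr = torch.rand(1, 3, 24, 24)
sr_t = teacher(lr)                                    # full-width teacher output
sr_s = teacher(lr, width_ratio=0.25)                  # weight-shared compact student
bicubic = F.interpolate(lr, scale_factor=2, mode='bicubic', align_corners=False)
embed = nn.Conv2d(3, 8, 3, padding=1)                 # stand-in for frozen VGG features
loss = contrastive_loss(embed, sr_s, sr_t.detach(), [bicubic])

In training, such a contrastive term would typically be weighted alongside the usual reconstruction losses between the teacher/student outputs and the ground-truth HR image.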

Cite

Text

Wang et al. "Towards Compact Single Image Super-Resolution via Contrastive Self-Distillation." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/155

Markdown

[Wang et al. "Towards Compact Single Image Super-Resolution via Contrastive Self-Distillation." International Joint Conference on Artificial Intelligence, 2021.](https://mlanthology.org/ijcai/2021/wang2021ijcai-compact/) doi:10.24963/IJCAI.2021/155

BibTeX

@inproceedings{wang2021ijcai-compact,
  title     = {{Towards Compact Single Image Super-Resolution via Contrastive Self-Distillation}},
  author    = {Wang, Yanbo and Lin, Shaohui and Qu, Yanyun and Wu, Haiyan and Zhang, Zhizhong and Xie, Yuan and Yao, Angela},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2021},
  pages     = {1122--1128},
  doi       = {10.24963/IJCAI.2021/155},
  url       = {https://mlanthology.org/ijcai/2021/wang2021ijcai-compact/}
}