Weight-Sharing Method for Upsampling Layer from Feature Embedding Recursive Block

Abstract

In the field of super-resolution, models based on the Laplacian pyramid framework must approximate an inverse convolution in their upscaling layers. Typically, a transposed convolution is used to approximate this inverse. This transposed convolution can be designed more efficiently to reduce the number of trainable weights. In this study, we propose a new model compression method that replaces the transposed convolution layer by sharing the weights of the convolution layer trained in the feature embedding recursive block. The proposed weight-sharing method effectively reduces training complexity and training time. Experiments confirm these reductions, even for relatively large image sizes.
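The core idea, reusing an already-trained convolution kernel to realize the upsampling (transposed) convolution instead of training a separate one, can be illustrated with a minimal 1-D sketch. This is not the paper's implementation: the names `conv1d` and `conv1d_transpose` are illustrative, stride is fixed at 1 for clarity, and the key point is only that both directions share the same kernel `w`, so the upsampling layer adds no new trainable weights.

```python
import numpy as np

def conv1d(x, w):
    """Valid cross-correlation with stride 1 (the forward convolution)."""
    n, k = len(x), len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(n - k + 1)])

def conv1d_transpose(y, w):
    """Transposed convolution that SHARES the same kernel w (weight sharing):
    each input value is scattered back through the kernel, mapping a
    length-m signal to length m + k - 1 (the adjoint of conv1d)."""
    m, k = len(y), len(w)
    x = np.zeros(m + k - 1)
    for i in range(m):
        x[i:i + k] += y[i] * w
    return x

# Example: one shared kernel serves both the downscaling and upscaling path.
w = np.array([1.0, -2.0, 0.5])          # trained convolution weights (reused)
x = np.arange(6.0)                       # toy input signal
low = conv1d(x, w)                       # forward pass, length 4
up = conv1d_transpose(low, w)            # upsampling pass, back to length 6
```

Because `conv1d_transpose` is the adjoint of `conv1d`, sharing `w` gives a principled approximation of the inverse mapping while halving the weights that would otherwise be trained for the upsampling layer.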

Cite

Text

Hyun et al. "Weight-Sharing Method for Upsampling Layer from Feature Embedding Recursive Block." NeurIPS 2024 Workshops: Compression, 2024.

Markdown

[Hyun et al. "Weight-Sharing Method for Upsampling Layer from Feature Embedding Recursive Block." NeurIPS 2024 Workshops: Compression, 2024.](https://mlanthology.org/neuripsw/2024/hyun2024neuripsw-weightsharing/)

BibTeX

@inproceedings{hyun2024neuripsw-weightsharing,
  title     = {{Weight-Sharing Method for Upsampling Layer from Feature Embedding Recursive Block}},
  author    = {Hyun, Jinwoo and Hyon, YunKyong and Lee, Mira and Lee, Sunju and Ha, Taeyoung and Kim, Young Rock},
  booktitle = {NeurIPS 2024 Workshops: Compression},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/hyun2024neuripsw-weightsharing/}
}