Memory-Oriented Structural Pruning for Efficient Image Restoration
Abstract
Deep learning (DL) based methods have significantly pushed forward the state of the art for image restoration (IR) tasks. Nevertheless, DL-based IR models are highly computation- and memory-intensive. The surging demand for processing higher-resolution images and running multiple tasks in parallel in practical mobile usage further adds to their computation and memory burdens. In this paper, we reveal the overlooked memory redundancy of IR models and propose a Memory-Oriented Structural Pruning (MOSP) method. To properly compress the long-range skip connections (a major source of the memory burden), we introduce a compactor module onto each skip connection to decouple the pruning of the skip connections from that of the main branch. MOSP progressively prunes the original model layers and the compactors to cut down peak memory while maintaining high IR quality. Experiments on real image denoising, image super-resolution and low-light image enhancement show that MOSP yields models with higher memory efficiency while better preserving performance compared with baseline pruning methods.
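The compactor idea in the abstract can be illustrated with a minimal NumPy sketch. All names, shapes and the identity initialization below are illustrative assumptions, not the paper's actual code: a compactor is modeled as a 1x1 convolution on a long-range skip connection, so pruning its output channels shrinks the skip feature (and thus the memory it holds) without forcing the same channel choice on the main branch.

```python
import numpy as np

def compactor(x, w):
    """Hypothetical compactor: a 1x1 conv applied to a skip feature.
    x has shape (C_in, H, W); w has shape (C_out, C_in)."""
    return np.einsum('oc,chw->ohw', w, x)

rng = np.random.default_rng(0)
skip_feat = rng.standard_normal((64, 16, 16))  # feature kept alive for the skip

w = np.eye(64)                  # compactor initialized as an identity map
keep = np.arange(64) % 2 == 0   # e.g. prune every other skip channel
w_pruned = w[keep]              # drop rows -> fewer output channels

out = compactor(skip_feat, w_pruned)
print(out.shape)                # (32, 16, 16): skip-path memory halved
```

Because the compactor sits only on the skip path, its row pruning is independent of any channel pruning applied inside the main branch, which is the decoupling the abstract refers to.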
Cite
Text
Shi et al. "Memory-Oriented Structural Pruning for Efficient Image Restoration." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I2.25319
Markdown
[Shi et al. "Memory-Oriented Structural Pruning for Efficient Image Restoration." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/shi2023aaai-memory/) doi:10.1609/AAAI.V37I2.25319
BibTeX
@inproceedings{shi2023aaai-memory,
title = {{Memory-Oriented Structural Pruning for Efficient Image Restoration}},
author = {Shi, Xiangsheng and Ning, Xuefei and Guo, Lidong and Zhao, Tianchen and Liu, Enshu and Cai, Yi and Dong, Yuhan and Yang, Huazhong and Wang, Yu},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2023},
  pages = {2245--2253},
doi = {10.1609/AAAI.V37I2.25319},
url = {https://mlanthology.org/aaai/2023/shi2023aaai-memory/}
}