Efficient Lightweight Image Denoising with Triple Attention Transformer
Abstract
Transformers have shown outstanding performance on image denoising, but existing Transformer-based denoising methods have large model sizes and high computational complexity, making them ill-suited to resource-constrained devices. In this paper, we propose a lightweight image denoising Transformer (LIDFormer) built on Triple Multi-Dconv Head Transposed Attention (TMDTA) to boost computational efficiency. LIDFormer first applies a Discrete Wavelet Transform (DWT) that maps the input image into a low-frequency space, greatly reducing the computational cost of denoising. However, the low-frequency representation lacks fine-grained feature information, which degrades denoising performance. To address this problem, we introduce a Complementary Periodic Feature Reusing (CPFR) scheme that aggregates shallow-layer and deep-layer features. Furthermore, TMDTA integrates global context along three dimensions, strengthening the model's global feature representation. Our method can also be applied as a pipeline to both convolutional neural networks and Transformers. Extensive experiments on several benchmarks demonstrate that LIDFormer achieves a better trade-off between denoising performance and computational complexity on real-world image denoising tasks.
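The paper's exact architecture is not reproduced here; as a rough illustration of the two ideas the abstract describes, the minimal sketch below pairs a one-level Haar DWT (which quarters the number of spatial positions later layers must process) with a Restormer-style multi-Dconv head transposed attention computed over the channel dimension. The names `haar_dwt` and `ChannelTransposedAttention` are hypothetical, and the actual TMDTA would apply analogous attention along two further dimensions; only the channel branch is sketched.

```python
# Minimal sketch (not the authors' code) of the two ideas in the abstract:
# (1) a Haar DWT that halves spatial resolution, so later layers run on
#     an H/2 x W/2 grid; (2) "transposed" self-attention computed over
#     channels rather than pixels, as in Restormer's MDTA. TMDTA is
#     described as applying such attention along three dimensions; only
#     one branch is shown, and all names here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


def haar_dwt(x: torch.Tensor) -> torch.Tensor:
    """One-level Haar DWT: (B, C, H, W) -> (B, 4C, H/2, W/2).

    The low-frequency band and three detail bands are stacked along
    channels, so subsequent layers pay 1/4 of the spatial cost.
    """
    a = x[:, :, 0::2, 0::2]  # top-left samples
    b = x[:, :, 0::2, 1::2]  # top-right
    c = x[:, :, 1::2, 0::2]  # bottom-left
    d = x[:, :, 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2  # low-frequency band
    lh = (a + b - c - d) / 2  # detail bands
    hl = (a - b + c - d) / 2
    hh = (a - b - c + d) / 2
    return torch.cat([ll, lh, hl, hh], dim=1)


class ChannelTransposedAttention(nn.Module):
    """Multi-Dconv head transposed attention over the channel dimension
    (a hypothetical simplification of one of TMDTA's three branches)."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.num_heads = num_heads
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1)
        # Depthwise 3x3 conv mixes local context into q, k, v ("Dconv").
        self.qkv_dwconv = nn.Conv2d(dim * 3, dim * 3, kernel_size=3,
                                    padding=1, groups=dim * 3)
        self.project_out = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.qkv_dwconv(self.qkv(x)).chunk(3, dim=1)
        # Reshape to (B, heads, C/heads, H*W): the attention map is
        # C x C rather than (HW) x (HW), so cost is linear in pixels.
        q = q.reshape(b, self.num_heads, c // self.num_heads, h * w)
        k = k.reshape(b, self.num_heads, c // self.num_heads, h * w)
        v = v.reshape(b, self.num_heads, c // self.num_heads, h * w)
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature
        out = attn.softmax(dim=-1) @ v
        return self.project_out(out.reshape(b, c, h, w))


if __name__ == "__main__":
    img = torch.randn(1, 16, 64, 64)
    low = haar_dwt(img)                     # (1, 64, 32, 32)
    attn = ChannelTransposedAttention(dim=64)
    print(attn(low).shape)                  # torch.Size([1, 64, 32, 32])
```

Because the attention map is formed between channels, its size is independent of image resolution; combined with the DWT's 4x reduction in spatial positions, this is what makes the overall design inexpensive enough for lightweight denoising.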
Cite
Text
Zhou et al. "Efficient Lightweight Image Denoising with Triple Attention Transformer." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I7.28604
Markdown
[Zhou et al. "Efficient Lightweight Image Denoising with Triple Attention Transformer." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/zhou2024aaai-efficient-a/) doi:10.1609/AAAI.V38I7.28604
BibTeX
@inproceedings{zhou2024aaai-efficient-a,
title = {{Efficient Lightweight Image Denoising with Triple Attention Transformer}},
author = {Zhou, Yubo and Lin, Jin and Ye, Fangchen and Qu, Yanyun and Xie, Yuan},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {7704--7712},
doi = {10.1609/AAAI.V38I7.28604},
url = {https://mlanthology.org/aaai/2024/zhou2024aaai-efficient-a/}
}