Dual-Domain Attention for Image Deblurring

Abstract

As a long-standing and challenging task, image deblurring aims to reconstruct the latent sharp image from its degraded counterpart. In this study, to bridge the gaps between degraded/sharp image pairs in the spatial and frequency domains simultaneously, we develop a dual-domain attention mechanism for image deblurring. Self-attention is widely used in vision tasks; however, its quadratic complexity makes it impractical for the high-resolution images common in deblurring. To alleviate this issue, we propose a novel spatial attention module that implements self-attention in the style of dynamic group convolution, integrating information from the local region, enhancing representation learning, and reducing the computational burden. Regarding frequency-domain learning, many frequency-based deblurring approaches either treat the spectrum as a whole or decompose frequency components in a complicated manner. In this work, we devise a frequency attention module that compactly decouples the spectrum into distinct frequency parts and accentuates the informative part with extremely lightweight learnable parameters. Finally, we incorporate both attention modules into a U-shaped network. Extensive comparisons with prior art on common benchmarks show that our model, named Dual-Domain Attention Network (DDANet), obtains comparable results with a significantly improved inference speed.
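The core idea of the frequency attention module, as described in the abstract, can be illustrated with a minimal sketch: transform a feature map to the frequency domain, split the spectrum into distinct parts (here, a simple low/high band split around the centre), and reweight each band with a lightweight learnable scalar. The function name, the circular cutoff mask, and the scalar weights below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def frequency_attention(x, w_low=1.0, w_high=1.0, cutoff=0.25):
    """Illustrative sketch (not the paper's code): decouple the 2D spectrum
    of a feature map into low/high-frequency parts and reweight each band
    with a scalar that would be learned in practice."""
    h, w = x.shape
    spec = np.fft.fftshift(np.fft.fft2(x))  # centre the zero frequency
    yy, xx = np.ogrid[:h, :w]
    # circular low-frequency mask around the spectrum centre (assumed split)
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low_mask = dist <= cutoff * min(h, w)
    # accentuate the informative band via per-band weights
    spec = spec * np.where(low_mask, w_low, w_high)
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))

x = np.random.rand(32, 32)
y = frequency_attention(x, w_low=1.0, w_high=0.5)  # damp high frequencies
```

With both weights set to 1 the module reduces to a numerical identity, which makes the band split easy to sanity-check; a real module would learn the weights (e.g. one per channel) jointly with the rest of the network.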

Cite

Text

Cui et al. "Dual-Domain Attention for Image Deblurring." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I1.25122

Markdown

[Cui et al. "Dual-Domain Attention for Image Deblurring." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/cui2023aaai-dual/) doi:10.1609/AAAI.V37I1.25122

BibTeX

@inproceedings{cui2023aaai-dual,
  title     = {{Dual-Domain Attention for Image Deblurring}},
  author    = {Cui, Yuning and Tao, Yi and Ren, Wenqi and Knoll, Alois},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {479--487},
  doi       = {10.1609/AAAI.V37I1.25122},
  url       = {https://mlanthology.org/aaai/2023/cui2023aaai-dual/}
}