Real Photographs Denoising with Noise Domain Adaptation and Attentive Generative Adversarial Network

Abstract

Methods based on deep convolutional neural networks (CNNs) have achieved favorable performance on synthetic noisy image denoising, but they are of limited use for denoising real photographs, since it is hard to obtain ground-truth clean images to generate paired training data. Moreover, the existing training datasets for real-photograph denoising are too small. To address this, we construct a new dataset, obtain the corresponding ground truth by averaging, and then extend the data through noise domain adaptation. Furthermore, we propose an attentive generative network by injecting visual attention into the generative network. During training, the visual attention map learns the noise regions, so the generative network pays more attention to them, which helps balance noise removal against texture preservation. Extensive experiments show that our method outperforms several state-of-the-art methods both quantitatively and qualitatively.
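
The abstract gives no architectural details, so the following is only a minimal sketch of the attention-gating idea it describes: an attention branch predicts a per-pixel map of likely noise regions, and that map modulates how strongly the generator corrects each pixel. It is written in PyTorch; all module names, layer sizes, and the gating formula (noisy - attn * residual) are illustrative assumptions rather than the paper's implementation, and the adversarial discriminator and noise-domain-adaptation step are omitted.

# A minimal sketch (not the authors' code) of an attention-guided denoising
# generator: an attention branch highlights noisy regions, and the residual
# correction is gated by that map. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class AttentiveGenerator(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Attention branch: predicts a per-pixel map in [0, 1] of noise regions.
        self.attention = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid(),
        )
        # Denoising branch: estimates a residual noise image, conditioned on
        # both the noisy input and the attention map (3 + 1 = 4 channels).
        self.denoiser = nn.Sequential(
            nn.Conv2d(4, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, noisy):
        attn = self.attention(noisy)         # (B, 1, H, W) noise-region map
        x = torch.cat([noisy, attn], dim=1)  # condition the denoiser on the map
        residual = self.denoiser(x)
        # The map gates the correction: noisy regions are corrected strongly,
        # clean textured regions are largely passed through unchanged.
        return noisy - attn * residual

if __name__ == "__main__":
    model = AttentiveGenerator()
    out = model(torch.randn(1, 3, 64, 64))
    print(out.shape)  # torch.Size([1, 3, 64, 64])

In this sketch the gating is what lets the network trade off noise removal against texture preservation: where the predicted map is near zero, the input passes through untouched, so textures in clean regions are preserved.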

Cite

Text

Lin et al. "Real Photographs Denoising with Noise Domain Adaptation and Attentive Generative Adversarial Network." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019. doi:10.1109/CVPRW.2019.00221

Markdown

[Lin et al. "Real Photographs Denoising with Noise Domain Adaptation and Attentive Generative Adversarial Network." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.](https://mlanthology.org/cvprw/2019/lin2019cvprw-real/) doi:10.1109/CVPRW.2019.00221

BibTeX

@inproceedings{lin2019cvprw-real,
  title     = {{Real Photographs Denoising with Noise Domain Adaptation and Attentive Generative Adversarial Network}},
  author    = {Lin, Kai and Li, Thomas H. and Liu, Shan and Li, Ge},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2019},
  pages     = {1717--1721},
  doi       = {10.1109/CVPRW.2019.00221},
  url       = {https://mlanthology.org/cvprw/2019/lin2019cvprw-real/}
}