Noise2Same: Optimizing a Self-Supervised Bound for Image Denoising

Abstract

Self-supervised frameworks that learn denoising models from only individual noisy images have shown strong capability and promising performance in various image denoising tasks. Existing self-supervised denoising frameworks are mostly built upon the same theoretical foundation, in which the denoising models are required to be J-invariant. However, our analyses indicate that the current theory and the requirement of J-invariance may lead to denoising models with reduced performance. In this work, we introduce Noise2Same, a novel self-supervised denoising framework. In Noise2Same, a new self-supervised loss is proposed by deriving a self-supervised upper bound of the typical supervised loss. In particular, Noise2Same requires neither J-invariance nor extra information about the noise model, and can therefore be applied to a wider range of denoising applications. We analyze the proposed Noise2Same both theoretically and experimentally. The experimental results show that Noise2Same remarkably outperforms previous self-supervised denoising methods in terms of denoising performance and training efficiency.
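The abstract describes a self-supervised loss that combines a plain reconstruction term with an invariance term computed on masked pixels, in place of a hard J-invariance constraint. The sketch below is a hedged, NumPy-only illustration of what such a loss could look like; the masking scheme (replacing masked pixels with random values), the weighting `lam`, and the exact form of the invariance term are assumptions for illustration, not the paper's definitive formulation.

```python
import numpy as np

def noise2same_style_loss(f, x, mask, lam=2.0, rng=None):
    """Sketch of a Noise2Same-style self-supervised loss.

    f    : denoising function mapping an image array to an array of the same shape
    x    : noisy input image (ndarray)
    mask : boolean ndarray marking the subset J of masked pixels
    lam  : assumed weight on the invariance term (illustrative choice)
    """
    rng = np.random.default_rng(0) if rng is None else rng

    # Build a masked copy of the input; filling masked pixels with random
    # values is one possible masking scheme (an assumption here).
    x_masked = x.copy()
    x_masked[mask] = rng.standard_normal(int(mask.sum()))

    out_full = f(x)          # denoiser applied to the original noisy image
    out_masked = f(x_masked) # denoiser applied to the masked image

    # Reconstruction term: squared error against the noisy input over all pixels.
    rec = np.mean((out_full - x) ** 2)

    # Invariance term: discrepancy between the two outputs on the masked pixels.
    inv = np.mean((out_full[mask] - out_masked[mask]) ** 2)

    # Self-supervised objective: reconstruction plus a weighted square-root
    # invariance penalty (square root follows the upper-bound structure).
    return rec + lam * np.sqrt(inv)
```

For a constant-zero "denoiser" on an all-ones image, the invariance term vanishes and the loss reduces to the reconstruction error, which makes the two terms easy to inspect separately.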

Cite

Text

Xie et al. "Noise2Same: Optimizing a Self-Supervised Bound for Image Denoising." Neural Information Processing Systems, 2020.

Markdown

[Xie et al. "Noise2Same: Optimizing a Self-Supervised Bound for Image Denoising." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/xie2020neurips-noise2same/)

BibTeX

@inproceedings{xie2020neurips-noise2same,
  title     = {{Noise2Same: Optimizing a Self-Supervised Bound for Image Denoising}},
  author    = {Xie, Yaochen and Wang, Zhengyang and Ji, Shuiwang},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/xie2020neurips-noise2same/}
}