Zero-Shot Noise2Noise: Efficient Image Denoising Without Any Data
Abstract
Recently, self-supervised neural networks have shown excellent image denoising performance. However, current dataset-free methods are either computationally expensive, require a noise model, or have inadequate image quality. In this work we show that a simple 2-layer network, without any training data or knowledge of the noise distribution, can enable high-quality image denoising at low computational cost. Our approach is motivated by Noise2Noise and Neighbor2Neighbor and works well for denoising pixel-wise independent noise. Our experiments on artificial, real-world camera, and microscope noise show that our method, termed ZS-N2N (Zero-Shot Noise2Noise), often outperforms existing dataset-free methods at a reduced cost, making it suitable for use cases with scarce data availability and limited compute.
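The Noise2Noise/Neighbor2Neighbor motivation mentioned in the abstract rests on a simple observation: if the noise is pixel-wise independent, two sub-images sampled from disjoint pixels of one noisy image carry the same underlying signal but independent noise realizations, so one can serve as a supervision target for the other. A minimal NumPy sketch of such a pair downsampler is below; the function name and the diagonal-averaging choice are illustrative assumptions, not code from the paper:

```python
import numpy as np

def pair_downsample(img):
    # Assumed illustrative helper: split one noisy image into two
    # half-resolution images by averaging the two diagonals of each
    # non-overlapping 2x2 block. With pixel-wise independent noise,
    # the two outputs share the signal but have independent noise,
    # so either can act as a Noise2Noise-style target for the other.
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
    d1 = 0.5 * (blocks[:, :, 0, 1] + blocks[:, :, 1, 0])  # anti-diagonal mean
    d2 = 0.5 * (blocks[:, :, 0, 0] + blocks[:, :, 1, 1])  # main-diagonal mean
    return d1, d2

# Demo: both halves estimate the same clean signal under iid noise.
rng = np.random.default_rng(0)
noisy = 0.5 + 0.1 * rng.standard_normal((64, 64))
d1, d2 = pair_downsample(noisy)
```

In the paper's setting, a small network would then be trained on such a pair (with additional consistency terms) rather than on an external dataset, which is what makes the method zero-shot.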
Cite
Text
Mansour and Heckel. "Zero-Shot Noise2Noise: Efficient Image Denoising Without Any Data." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.01347
Markdown
[Mansour and Heckel. "Zero-Shot Noise2Noise: Efficient Image Denoising Without Any Data." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/mansour2023cvpr-zeroshot/) doi:10.1109/CVPR52729.2023.01347
BibTeX
@inproceedings{mansour2023cvpr-zeroshot,
title = {{Zero-Shot Noise2Noise: Efficient Image Denoising Without Any Data}},
author = {Mansour, Youssef and Heckel, Reinhard},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2023},
pages = {14018-14027},
doi = {10.1109/CVPR52729.2023.01347},
url = {https://mlanthology.org/cvpr/2023/mansour2023cvpr-zeroshot/}
}