Pseudo-Siamese Blind-Spot Transformers for Self-Supervised Real-World Denoising
Abstract
Real-world image denoising remains a challenging task. This paper studies self-supervised image denoising, which requires only noisy images captured in a single shot. We revamp the blind-spot technique by leveraging the transformer's capability for long-range pixel interactions, which is crucial for effectively removing noise dependence among related pixels, a requirement for achieving strong performance with the blind-spot technique. The proposed method integrates these elements with two key innovations: a directional self-attention (DSA) module that uses a half-plane grid for self-attention, creating a sophisticated blind-spot structure, and a Siamese architecture with mutual learning to mitigate the performance impact of the restricted attention grid in DSA. Experiments on benchmark datasets demonstrate that our method outperforms existing self-supervised and clean-image-free methods. This combination of blind-spot and transformer techniques provides a natural synergy for tackling real-world image denoising challenges.
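To make the half-plane idea concrete, below is a minimal PyTorch sketch of self-attention restricted to a half-plane grid that excludes the query pixel itself (the blind spot). The function names, the row-major masking convention, and the single-head identity projections are assumptions for illustration only; they are not the authors' implementation, which additionally rotates the attention direction and couples two branches via mutual learning.

```python
# Illustrative sketch of half-plane masked self-attention (not the paper's code).
import torch


def half_plane_mask(h: int, w: int, device=None) -> torch.Tensor:
    """Boolean mask of shape (h*w, h*w): query i may attend to key j only if
    pixel j lies strictly in the half-plane above i (row-major order), so the
    query pixel itself is excluded -- the blind spot."""
    idx = torch.arange(h * w, device=device)
    rows = idx // w
    # key row strictly less than query row -> strict upper half-plane
    return rows.unsqueeze(1) > rows.unsqueeze(0)


def directional_self_attention(x: torch.Tensor) -> torch.Tensor:
    """Single-head self-attention over a (B, C, H, W) feature map restricted to
    a half-plane attention grid. Aggregating outputs over rotated copies of the
    input (0/90/180/270 degrees) would cover all directions while keeping the
    centre pixel blind, which is the spirit of a blind-spot design."""
    b, c, h, w = x.shape
    tokens = x.flatten(2).transpose(1, 2)            # (B, H*W, C)
    q, k, v = tokens, tokens, tokens                 # toy sketch: identity projections
    attn = (q @ k.transpose(1, 2)) / (c ** 0.5)      # (B, H*W, H*W)
    mask = half_plane_mask(h, w, device=x.device)
    attn = attn.masked_fill(~mask, float("-inf"))
    weights = torch.softmax(attn, dim=-1)
    # queries in the first image row have no allowed keys; zero them to avoid NaNs
    weights = torch.nan_to_num(weights, nan=0.0)
    out = weights @ v                                # (B, H*W, C)
    return out.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    y = directional_self_attention(torch.randn(1, 8, 16, 16))
    print(y.shape)  # torch.Size([1, 8, 16, 16])
```

Restricting attention to a half-plane is what makes the receptive field blind at the query pixel, and it is also why a single direction alone loses information; in the paper this loss is compensated by the pseudo-Siamese branches trained with mutual learning.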
Cite
Text

Quan et al. "Pseudo-Siamese Blind-Spot Transformers for Self-Supervised Real-World Denoising." Neural Information Processing Systems, 2024. doi:10.52202/079017-0442

Markdown

[Quan et al. "Pseudo-Siamese Blind-Spot Transformers for Self-Supervised Real-World Denoising." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/quan2024neurips-pseudosiamese/) doi:10.52202/079017-0442

BibTeX
@inproceedings{quan2024neurips-pseudosiamese,
  title     = {{Pseudo-Siamese Blind-Spot Transformers for Self-Supervised Real-World Denoising}},
  author    = {Quan, Yuhui and Zheng, Tianxiang and Ji, Hui},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0442},
  url       = {https://mlanthology.org/neurips/2024/quan2024neurips-pseudosiamese/}
}