ViDeNN: Deep Blind Video Denoising
Abstract
We propose ViDeNN: a CNN for video denoising without prior knowledge of the noise distribution (blind denoising). The architecture combines spatial and temporal filtering, learning to spatially denoise each frame first while simultaneously learning how to combine temporal information across frames, handling object motion, brightness changes, low-light conditions, and temporal inconsistencies. We demonstrate the importance of the data used to train CNNs, creating for this purpose a dedicated low-light dataset. We test ViDeNN on common benchmarks and on self-collected data, achieving results comparable with the state of the art.
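Conceptually, the two-stage design described in the abstract (per-frame spatial denoising followed by temporal fusion of neighboring frames) could look like the following PyTorch-style sketch. This is a minimal illustration only: the layer counts, channel widths, the 3-frame window, the residual-learning formulation, and all class names (SpatialCNN, TemporalCNN, ViDeNNSketch) are assumptions for clarity, not the paper's exact architecture.

```python
# Hedged sketch of a ViDeNN-like two-stage spatio-temporal denoiser:
# a spatial CNN denoises each frame independently, then a temporal CNN
# fuses a window of 3 spatially-denoised frames into one clean frame.
# Hyperparameters below are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, final=False):
    """3x3 conv (+ ReLU unless it is the output layer)."""
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)]
    if not final:
        layers.append(nn.ReLU(inplace=True))
    return layers


class SpatialCNN(nn.Module):
    """Per-frame denoiser; predicts the noise residual (DnCNN-style)."""
    def __init__(self, channels=3, features=64, depth=8):
        super().__init__()
        layers = conv_block(channels, features)
        for _ in range(depth - 2):
            layers += conv_block(features, features)
        layers += conv_block(features, channels, final=True)
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.body(x)  # subtract the estimated noise


class TemporalCNN(nn.Module):
    """Fuses 3 consecutive spatially-denoised frames (channel-stacked)."""
    def __init__(self, channels=3, features=64, depth=6):
        super().__init__()
        layers = conv_block(3 * channels, features)
        for _ in range(depth - 2):
            layers += conv_block(features, features)
        layers += conv_block(features, channels, final=True)
        self.body = nn.Sequential(*layers)

    def forward(self, prev, curr, nxt):
        stacked = torch.cat([prev, curr, nxt], dim=1)
        return curr - self.body(stacked)  # residual w.r.t. the center frame


class ViDeNNSketch(nn.Module):
    """Spatial denoising first, then temporal combination."""
    def __init__(self):
        super().__init__()
        self.spatial = SpatialCNN()
        self.temporal = TemporalCNN()

    def forward(self, frames):  # frames: (B, 3, C, H, W) noisy triplet
        denoised = [self.spatial(frames[:, i]) for i in range(3)]
        return self.temporal(*denoised)


if __name__ == "__main__":
    model = ViDeNNSketch()
    triplet = torch.rand(1, 3, 3, 128, 128)  # one noisy 3-frame window
    print(model(triplet).shape)  # torch.Size([1, 3, 128, 128])
```

Because both stages predict residuals against their input, the network only has to model the noise itself, which is a common design choice for blind denoisers whose noise statistics vary across frames.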
Cite
Text
Claus and van Gemert. "ViDeNN: Deep Blind Video Denoising." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019. doi:10.1109/CVPRW.2019.00235

Markdown
[Claus and van Gemert. "ViDeNN: Deep Blind Video Denoising." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.](https://mlanthology.org/cvprw/2019/claus2019cvprw-videnn/) doi:10.1109/CVPRW.2019.00235

BibTeX
@inproceedings{claus2019cvprw-videnn,
title = {{ViDeNN: Deep Blind Video Denoising}},
author = {Claus, Michele and van Gemert, Jan},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2019},
pages = {1843--1852},
doi = {10.1109/CVPRW.2019.00235},
url = {https://mlanthology.org/cvprw/2019/claus2019cvprw-videnn/}
}