Deep Model-Based Super-Resolution with Non-Uniform Blur
Abstract
We propose a state-of-the-art method for super-resolution with non-uniform blur. Single-image super-resolution methods seek to restore a high-resolution image from blurred, subsampled, and noisy measurements. Despite their impressive performance, existing techniques usually assume a uniform blur kernel and therefore do not generalize well to non-uniform blur. In this paper, we instead address the more realistic and computationally challenging case of spatially-varying blur. To this end, we first propose a fast deep plug-and-play algorithm, based on linearized ADMM splitting techniques, that solves the super-resolution problem with spatially-varying blur. Second, we unfold this iterative algorithm into a single network and train it end-to-end, thereby avoiding the intricacy of manually tuning the parameters involved in the optimization scheme. Our algorithm achieves remarkable performance and, after a single training, generalizes well to a large family of spatially-varying blur kernels, noise levels, and scale factors.
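The abstract describes a plug-and-play scheme with a linearized data-fidelity step for the forward model "spatially-varying blur, then subsampling, then noise." The sketch below is an illustrative reading of that idea, not the authors' code: the operators, step sizes (`tau`, `mu`), iteration count, and the identity `denoiser` placeholder are all assumptions made here for clarity; in the paper the prior step is a trained denoiser and the loop is unfolded and trained end-to-end.

```python
# Minimal sketch (NOT the authors' implementation) of a plug-and-play loop with a
# linearized (gradient) data-fidelity step for super-resolution with spatially-varying blur.
import numpy as np

def blur_spatially_varying(x, kernels):
    """Per-pixel blur with zero padding; kernels has shape (H, W, k, k)."""
    H, W = x.shape
    k = kernels.shape[-1]
    pad = k // 2
    xp = np.pad(x, pad)  # zero padding
    out = np.empty_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + k, j:j + k] * kernels[i, j])
    return out

def blur_adjoint(v, kernels):
    """Exact adjoint of the zero-padded spatially-varying blur (scatter each kernel)."""
    H, W = v.shape
    k = kernels.shape[-1]
    pad = k // 2
    outp = np.zeros((H + 2 * pad, W + 2 * pad))
    for i in range(H):
        for j in range(W):
            outp[i:i + k, j:j + k] += v[i, j] * kernels[i, j]
    return outp[pad:H + pad, pad:W + pad]

def downsample(x, s):
    """Subsample by keeping every s-th pixel."""
    return x[::s, ::s]

def upsample_adjoint(y, s, shape):
    """Adjoint of subsampling: zero-fill back onto the high-resolution grid."""
    x = np.zeros(shape)
    x[::s, ::s] = y
    return x

def denoiser(v, sigma):
    """Placeholder prior step; a trained CNN denoiser would be plugged in here."""
    return v  # identity, for illustration only

def pnp_super_resolution(y, kernels, s, shape, n_iter=50, mu=1.0, tau=0.2, sigma=0.05):
    """Plug-and-play loop; the data subproblem is handled by a single gradient step."""
    x = upsample_adjoint(y, s, shape)  # crude initialization on the HR grid
    z = x.copy()
    u = np.zeros(shape)                # scaled dual variable
    for _ in range(n_iter):
        # Linearized data step: one gradient step on 0.5 * ||S K x - y||^2,
        # since the exact subproblem has no closed form for spatially-varying blur.
        r = downsample(blur_spatially_varying(x, kernels), s) - y
        grad = blur_adjoint(upsample_adjoint(r, s, shape), kernels)
        x = x - tau * (grad + mu * (x - z + u))
        # Prior step: the denoiser plays the role of a proximal operator.
        z = denoiser(x + u, sigma)
        # Dual update.
        u = u + x - z
    return x
```

Unfolding this loop for a fixed number of iterations and learning the step parameters together with the denoiser end-to-end corresponds to the unrolled network the abstract refers to.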
Cite
Text
Laroche et al. "Deep Model-Based Super-Resolution with Non-Uniform Blur." Winter Conference on Applications of Computer Vision, 2023.
Markdown
[Laroche et al. "Deep Model-Based Super-Resolution with Non-Uniform Blur." Winter Conference on Applications of Computer Vision, 2023.](https://mlanthology.org/wacv/2023/laroche2023wacv-deep/)
BibTeX
@inproceedings{laroche2023wacv-deep,
  title     = {{Deep Model-Based Super-Resolution with Non-Uniform Blur}},
  author    = {Laroche, Charles and Almansa, Andrés and Tassano, Matias},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2023},
  pages     = {1797--1808},
  url       = {https://mlanthology.org/wacv/2023/laroche2023wacv-deep/}
}