Single Image Defocus Deblurring via Implicit Neural Inverse Kernels

Abstract

Single image defocus deblurring (SIDD) is a challenging task due to the spatially-varying nature of defocus blur, characterized by per-pixel point spread functions (PSFs). Existing deep-learning-based methods for SIDD are limited by either over-fitting due to the lack of model constraints or under-parametrization that restricts their applicability to real-world images. To address these limitations, this paper proposes an interpretable approach that explicitly predicts inverse kernels with structural regularization. Motivated by the observation that defocus PSFs within an image often have similar shapes but different sizes, we represent the inverse kernels linearly over a multi-scale dictionary parameterized by implicit neural representations. We predict the corresponding representation coefficients via a duplex scale-recurrent neural network that jointly performs fine-to-coarse and coarse-to-fine estimations. Extensive experiments demonstrate that our approach achieves excellent performance using a lightweight model.
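
The sketch below is a minimal illustration (not the authors' code) of the core idea described in the abstract: an implicit neural representation, here a small coordinate MLP, parameterizes a multi-scale dictionary of inverse-kernel atoms, and each pixel's inverse kernel is a linear combination of those atoms. Names such as `KernelINR`, the coordinate rescaling used to realize multiple scales, and the random placeholder coefficients are illustrative assumptions; in the paper the coefficients are predicted by a duplex scale-recurrent network.

```python
import torch
import torch.nn as nn


class KernelINR(nn.Module):
    """MLP mapping a 2-D kernel coordinate to one value per dictionary atom."""

    def __init__(self, num_atoms: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_atoms),
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.net(coords)  # (..., num_atoms)


def build_dictionary(inr: KernelINR, kernel_size: int, scales) -> torch.Tensor:
    """Query the INR on rescaled coordinate grids to obtain multi-scale atoms
    that share a common shape but differ in effective size."""
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, kernel_size),
        torch.linspace(-1, 1, kernel_size),
        indexing="ij",
    )
    coords = torch.stack([ys, xs], dim=-1)           # (K, K, 2)
    atoms = [inr(coords / s) for s in scales]        # each (K, K, num_atoms)
    return torch.cat(atoms, dim=-1)                  # (K, K, num_atoms * len(scales))


if __name__ == "__main__":
    K, A, H, W = 15, 8, 32, 32
    scales = [0.5, 1.0, 2.0]
    inr = KernelINR(num_atoms=A)
    dictionary = build_dictionary(inr, K, scales)    # (15, 15, 24)

    # Placeholder per-pixel coefficients; the paper predicts these with a
    # duplex (fine-to-coarse and coarse-to-fine) scale-recurrent network.
    coeffs = torch.randn(H, W, A * len(scales))

    # Per-pixel inverse kernels as linear combinations of dictionary atoms.
    inv_kernels = torch.einsum("hwc,ijc->hwij", coeffs, dictionary)
    print(inv_kernels.shape)  # torch.Size([32, 32, 15, 15])
```

The linear-combination step is what makes the predicted inverse kernels structurally regularized: every per-pixel kernel is constrained to lie in the span of the shared INR-parameterized dictionary rather than being estimated freely.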

Cite

Text

Quan et al. "Single Image Defocus Deblurring via Implicit Neural Inverse Kernels." International Conference on Computer Vision, 2023. doi:10.1109/ICCV51070.2023.01158

Markdown

[Quan et al. "Single Image Defocus Deblurring via Implicit Neural Inverse Kernels." International Conference on Computer Vision, 2023.](https://mlanthology.org/iccv/2023/quan2023iccv-single/) doi:10.1109/ICCV51070.2023.01158

BibTeX

@inproceedings{quan2023iccv-single,
  title     = {{Single Image Defocus Deblurring via Implicit Neural Inverse Kernels}},
  author    = {Quan, Yuhui and Yao, Xin and Ji, Hui},
  booktitle = {International Conference on Computer Vision},
  year      = {2023},
  pages     = {12600--12610},
  doi       = {10.1109/ICCV51070.2023.01158},
  url       = {https://mlanthology.org/iccv/2023/quan2023iccv-single/}
}