Blind Deblurring for Saturated Images
Abstract
Blind deblurring has received considerable attention in recent years. However, state-of-the-art methods often fail on saturated blurry images. The main reason is that saturated pixels do not conform to the commonly used linear blur model. Pioneering methods suggest excluding saturated pixels during the deblurring process, which sacrifices the informative edges from saturated regions and leaves insufficient information for kernel estimation when large saturated regions exist. To address this problem, we introduce a new blur model that fits both saturated and unsaturated pixels, so that all informative pixels can be considered during the deblurring process. Based on our model, we develop an effective maximum a posteriori (MAP)-based optimization framework. Quantitative and qualitative evaluations on benchmark datasets and challenging real-world examples show that the proposed method performs favorably against existing methods.
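To illustrate the issue the abstract refers to, the sketch below simulates why saturated pixels violate the standard linear blur model B = k * I + n: once the sensor clips intensities at its dynamic-range limit, the observed values are no longer a linear function of the latent image. This is a minimal illustration of the general problem, not the paper's proposed blur model; the function name, kernel, and clipping range are illustrative assumptions.

import numpy as np
from scipy.signal import convolve2d

def blurred_observation(latent, kernel, noise_std=0.01, saturate=True):
    """Simulate a blurry observation of a latent image.

    Without saturation this follows the linear model B = k * I + n.
    With saturation, intensities are clipped at 1.0, so bright (saturated)
    pixels no longer obey the linear model that most blind-deblurring
    methods assume.
    """
    blurred = convolve2d(latent, kernel, mode="same", boundary="symm")
    blurred = blurred + np.random.normal(0.0, noise_std, blurred.shape)
    if saturate:
        blurred = np.clip(blurred, 0.0, 1.0)  # clipping breaks linearity
    return blurred

# Example: a bright point source far above the sensor range (e.g. a street
# lamp in a night scene) gets clipped after blurring, so kernel estimation
# based purely on the linear model mis-handles these pixels.
kernel = np.ones((5, 5)) / 25.0          # simple box blur kernel (assumed)
latent = np.zeros((32, 32))
latent[16, 16] = 10.0                    # light source exceeding the sensor range
observed = blurred_observation(latent, kernel)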
Cite
Text
Chen et al. "Blind Deblurring for Saturated Images." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00624Markdown
[Chen et al. "Blind Deblurring for Saturated Images." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/chen2021cvpr-blind/) doi:10.1109/CVPR46437.2021.00624BibTeX
@inproceedings{chen2021cvpr-blind,
title = {{Blind Deblurring for Saturated Images}},
author = {Chen, Liang and Zhang, Jiawei and Lin, Songnan and Fang, Faming and Ren, Jimmy S.},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2021},
pages = {6308-6316},
doi = {10.1109/CVPR46437.2021.00624},
url = {https://mlanthology.org/cvpr/2021/chen2021cvpr-blind/}
}