E-CIR: Event-Enhanced Continuous Intensity Recovery
Abstract
A camera begins to sense light the moment we press the shutter button. During the exposure interval, relative motion between the scene and the camera causes motion blur, a common undesirable visual artifact. This paper presents E-CIR, which converts a blurry image into a sharp video represented as a parametric function from time to intensity. E-CIR leverages events, the asynchronous brightness-change signals recorded by an event camera, as an auxiliary input. We discuss how to exploit the temporal event structure to construct the parametric bases. We demonstrate how to train a deep learning model to predict the function coefficients. To improve the appearance consistency, we further introduce a refinement module to propagate visual features among consecutive frames. Compared to state-of-the-art event-enhanced deblurring approaches, E-CIR generates smoother and more realistic results. The implementation of E-CIR is available at https://github.com/chensong1995/E-CIR.
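The abstract's central idea, recovering a sharp frame at any timestamp by evaluating a per-pixel parametric intensity function, can be sketched as below. This is a minimal illustration assuming a simple monomial basis; the paper instead constructs its bases from the temporal structure of the events, and the helper name `eval_intensity` and the coefficient layout here are hypothetical.

```python
import numpy as np

def eval_intensity(coeffs, t):
    """Evaluate the per-pixel intensity function I(t) = sum_k c_k * t^k.

    coeffs: (K, H, W) array of per-pixel coefficients (hypothetical
            layout; in E-CIR such coefficients are predicted by a
            deep network from the blurry image and the events).
    t:      scalar timestamp, normalized to [0, 1] over the exposure.
    Returns the (H, W) latent sharp frame at time t.
    """
    K = coeffs.shape[0]
    basis = np.array([t ** k for k in range(K)])  # monomial basis, a stand-in
    return np.tensordot(basis, coeffs, axes=1)    # contract over K -> (H, W)

# Sampling t across the exposure turns one blurry image's coefficients
# into a sharp video, which the paper's refinement module then polishes.
coeffs = np.random.rand(4, 8, 8)                  # toy K=4 coefficients
video = [eval_intensity(coeffs, t) for t in np.linspace(0.0, 1.0, 5)]
```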
Cite
Text
Song et al. "E-CIR: Event-Enhanced Continuous Intensity Recovery." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00765
Markdown
[Song et al. "E-CIR: Event-Enhanced Continuous Intensity Recovery." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/song2022cvpr-ecir/) doi:10.1109/CVPR52688.2022.00765
BibTeX
@inproceedings{song2022cvpr-ecir,
  title     = {{E-CIR: Event-Enhanced Continuous Intensity Recovery}},
  author    = {Song, Chen and Huang, Qixing and Bajaj, Chandrajit},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {7803--7812},
  doi       = {10.1109/CVPR52688.2022.00765},
  url       = {https://mlanthology.org/cvpr/2022/song2022cvpr-ecir/}
}