Efficient Spatio-Temporal Recurrent Neural Network for Video Deblurring
Abstract
Real-time video deblurring remains a challenging task due to the complexity of spatially and temporally varying blur and the requirement of low computational cost. To improve network efficiency, we adopt residual dense blocks into RNN cells, so as to efficiently extract the spatial features of the current frame. Furthermore, a global spatio-temporal attention module is proposed to fuse the effective hierarchical features from past and future frames to help better deblur the current frame. For evaluation, we also collect a novel dataset with paired blurry/sharp video clips by using a co-axial beam splitter system. Through experiments on synthetic and real-world datasets, we show that our proposed method achieves better deblurring performance, both quantitatively and qualitatively, at lower computational cost than state-of-the-art video deblurring methods.
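The abstract mentions adopting residual dense blocks (RDBs) into the RNN cells. As a rough illustration of the dense-connectivity idea behind an RDB (each layer sees the concatenation of all earlier feature maps, and a final fusion is added back to the input), here is a minimal NumPy sketch. The pointwise "convolutions", layer counts, and channel sizes are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def conv1x1(x, w):
    # Pointwise "convolution": channel mixing via matrix multiply.
    # x: (C_in, H, W), w: (C_out, C_in) -> (C_out, H, W)
    return np.einsum('oc,chw->ohw', w, x)

def residual_dense_block(x, weights):
    """Sketch of a residual dense block:
    - dense connectivity: each layer's input is the concatenation
      of the block input and all previous layer outputs;
    - local feature fusion: a final 1x1 mixing back to C channels;
    - local residual learning: the fused result is added to the input.
    `weights` is hypothetical: L growth layers plus one fusion matrix."""
    feats = [x]
    for w in weights[:-1]:
        inp = np.concatenate(feats, axis=0)     # dense connectivity
        feats.append(np.maximum(conv1x1(inp, w), 0.0))  # ReLU layer
    fused = conv1x1(np.concatenate(feats, axis=0), weights[-1])
    return x + fused                            # residual connection

# Toy usage with random weights (C channels, growth rate G, L layers).
rng = np.random.default_rng(0)
C, G, L = 4, 8, 3
weights = [rng.standard_normal((G, C + i * G)) * 0.1 for i in range(L)]
weights.append(rng.standard_normal((C, C + L * G)) * 0.1)
x = rng.standard_normal((C, 16, 16))
y = residual_dense_block(x, weights)   # output keeps the input shape
```

Because the block preserves the channel count, such cells can be stacked inside a recurrent pipeline without reshaping the hidden state between steps.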
Cite
Text
Zhong et al. "Efficient Spatio-Temporal Recurrent Neural Network for Video Deblurring." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58539-6_12

Markdown
[Zhong et al. "Efficient Spatio-Temporal Recurrent Neural Network for Video Deblurring." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/zhong2020eccv-efficient/) doi:10.1007/978-3-030-58539-6_12

BibTeX
@inproceedings{zhong2020eccv-efficient,
title = {{Efficient Spatio-Temporal Recurrent Neural Network for Video Deblurring}},
author = {Zhong, Zhihang and Gao, Ye and Zheng, Yinqiang and Zheng, Bo},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58539-6_12},
url = {https://mlanthology.org/eccv/2020/zhong2020eccv-efficient/}
}