EvIntSR-Net: Event Guided Multiple Latent Frames Reconstruction and Super-Resolution
Abstract
An event camera detects scene radiance changes and outputs a sequence of asynchronous event streams with high dynamic range, high temporal resolution, and low latency. However, the spatial resolution of event cameras is limited as a trade-off for these outstanding properties. In this paper, we propose EvIntSR-Net, which converts event data into multiple latent intensity frames in order to achieve super-resolution on intensity images. EvIntSR-Net bridges the domain gap between event streams and intensity frames and learns to merge a sequence of latent intensity frames in a recurrent updating manner. Experimental results show that EvIntSR-Net can reconstruct super-resolved intensity images with higher dynamic range and fewer blurry artifacts by fusing events with intensity frames, for both simulated and real-world data. Furthermore, the proposed EvIntSR-Net is able to generate high-frame-rate videos with super-resolved frames.
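
The abstract describes a two-stage flow: events are first converted into several latent intensity frames, which are then merged in a recurrent updating loop and upsampled. The following is a minimal, self-contained sketch of that flow in PyTorch; the voxel-grid event representation, the module names (`events_to_voxel_grid`, `RecurrentFusionSR`), and the layer choices are illustrative assumptions for readability, not the authors' implementation.

```python
# Hypothetical sketch of the data flow described in the abstract:
# events -> latent intensity frames -> recurrent merging -> super-resolved frame.
# Shapes and layers are assumptions, not taken from the paper.
import torch
import torch.nn as nn


def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate (t, x, y, p) events into a temporal voxel grid.

    `events` is an (N, 4) tensor with timestamps normalized to [0, 1];
    this representation is a common choice, assumed here for illustration.
    """
    voxel = torch.zeros(num_bins, height, width)
    t = events[:, 0]
    x = events[:, 1].long()
    y = events[:, 2].long()
    p = events[:, 3]
    bins = (t * (num_bins - 1)).long()
    voxel.index_put_((bins, y, x), p, accumulate=True)
    return voxel


class RecurrentFusionSR(nn.Module):
    """Toy stand-in for the recurrent merging and super-resolution stages."""

    def __init__(self, channels=16, scale=2):
        super().__init__()
        self.encode = nn.Conv2d(2, channels, 3, padding=1)  # latent frame + running state
        self.update = nn.Conv2d(channels, 1, 3, padding=1)  # state update
        self.upsample = nn.Sequential(
            nn.Conv2d(1, scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),                          # sub-pixel upsampling
        )

    def forward(self, latent_frames):
        # latent_frames: (B, T, 1, H, W) sequence of reconstructed intensity frames
        state = torch.zeros_like(latent_frames[:, 0])
        for t in range(latent_frames.shape[1]):
            feat = torch.relu(self.encode(torch.cat([latent_frames[:, t], state], dim=1)))
            state = state + self.update(feat)                # recurrent updating
        return self.upsample(state)                          # super-resolved output


# Usage: random events and frames, just to check that the shapes flow through.
events = torch.rand(1000, 4)
events[:, 1] *= 63                                           # x coordinates in a 64x64 frame
events[:, 2] *= 63                                           # y coordinates
voxel = events_to_voxel_grid(events, num_bins=5, height=64, width=64)
latent = voxel.unsqueeze(0).unsqueeze(2)                     # (1, 5, 1, 64, 64)
sr = RecurrentFusionSR()(latent)
print(sr.shape)                                              # torch.Size([1, 1, 128, 128])
```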
Cite
Text
Han et al. "EvIntSR-Net: Event Guided Multiple Latent Frames Reconstruction and Super-Resolution." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00484

Markdown
[Han et al. "EvIntSR-Net: Event Guided Multiple Latent Frames Reconstruction and Super-Resolution." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/han2021iccv-evintsrnet/) doi:10.1109/ICCV48922.2021.00484

BibTeX
@inproceedings{han2021iccv-evintsrnet,
title = {{EvIntSR-Net: Event Guided Multiple Latent Frames Reconstruction and Super-Resolution}},
author = {Han, Jin and Yang, Yixin and Zhou, Chu and Xu, Chao and Shi, Boxin},
booktitle = {International Conference on Computer Vision},
year = {2021},
  pages = {4882--4891},
doi = {10.1109/ICCV48922.2021.00484},
url = {https://mlanthology.org/iccv/2021/han2021iccv-evintsrnet/}
}