Event-Based Video Frame Interpolation with Cross-Modal Asymmetric Bidirectional Motion Fields
Abstract
Video Frame Interpolation (VFI) aims to generate intermediate video frames between consecutive input frames. Since event cameras are bio-inspired sensors that encode only brightness changes with microsecond temporal resolution, several works have utilized the event camera to enhance the performance of VFI. However, existing methods estimate bidirectional inter-frame motion fields with only events or approximations, which cannot account for the complex motion in real-world scenarios. In this paper, we propose a novel event-based VFI framework with cross-modal asymmetric bidirectional motion field estimation. In detail, our EIF-BiOFNet utilizes the valuable characteristics of both events and images for direct estimation of inter-frame motion fields without any approximation methods. Moreover, we develop an interactive attention-based frame synthesis network to efficiently leverage the complementary warping-based and synthesis-based features. Finally, we build a large-scale event-based VFI dataset, ERF-X170FPS, with a high frame rate, extreme motion, and dynamic textures to overcome the limitations of previous event-based VFI datasets. Extensive experimental results validate that our method shows significant performance improvement over the state-of-the-art VFI methods on various datasets. Our project page is available at: https://github.com/intelpro/CBMNet
Cite
Text
Kim et al. "Event-Based Video Frame Interpolation with Cross-Modal Asymmetric Bidirectional Motion Fields." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.01729
Markdown
[Kim et al. "Event-Based Video Frame Interpolation with Cross-Modal Asymmetric Bidirectional Motion Fields." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/kim2023cvpr-eventbased/) doi:10.1109/CVPR52729.2023.01729
BibTeX
@inproceedings{kim2023cvpr-eventbased,
title = {{Event-Based Video Frame Interpolation with Cross-Modal Asymmetric Bidirectional Motion Fields}},
author = {Kim, Taewoo and Chae, Yujeong and Jang, Hyun-Kurl and Yoon, Kuk-Jin},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2023},
pages = {18032--18042},
doi = {10.1109/CVPR52729.2023.01729},
url = {https://mlanthology.org/cvpr/2023/kim2023cvpr-eventbased/}
}