Hybrid Neural Fusion for Full-Frame Video Stabilization

Abstract

Existing video stabilization methods often generate visible distortion or require aggressive cropping of frame boundaries, resulting in a smaller field of view. In this work, we present a frame synthesis algorithm to achieve full-frame video stabilization. We first estimate dense warp fields from neighboring frames and then synthesize the stabilized frame by fusing the warped contents. Our core technical novelty lies in the learning-based hybrid-space fusion that alleviates artifacts caused by optical flow inaccuracy and fast-moving objects. We validate the effectiveness of our method on the NUS, Selfie, and DeepStab video datasets. Extensive experimental results demonstrate the merits of our approach over prior video stabilization methods.
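The pipeline sketched in the abstract (warp neighboring frames toward the target view, then fuse the warped contents into one full frame) can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: it uses off-the-shelf Farneback optical flow in place of the paper's learned dense warp fields, and a validity-weighted average in place of the learned hybrid-space fusion network; the helper names warp_to_target and fuse_frames are hypothetical.

import cv2
import numpy as np

def warp_to_target(src_gray, tgt_gray, src_color):
    """Warp src_color toward the target frame using dense optical flow.

    Stand-in for the paper's estimated dense warp fields: Farneback flow
    from target to source gives, for each target pixel, where to sample
    in the source frame.
    """
    flow = cv2.calcOpticalFlowFarneback(
        tgt_gray, src_gray, None,
        pyr_scale=0.5, levels=3, winsize=21,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    h, w = tgt_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    warped = cv2.remap(src_color, map_x, map_y, cv2.INTER_LINEAR,
                       borderMode=cv2.BORDER_CONSTANT, borderValue=0)
    # Track which target pixels received valid source content, so the
    # fusion step can fill frame boundaries from other neighbors.
    valid = cv2.remap(np.ones((h, w), np.float32), map_x, map_y,
                      cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT,
                      borderValue=0)
    return warped, valid

def fuse_frames(target, neighbors):
    """Fuse warped neighbors into a full frame.

    A simple validity-weighted average replaces the paper's learned
    hybrid-space fusion, for illustration only.
    """
    tgt_gray = cv2.cvtColor(target, cv2.COLOR_BGR2GRAY)
    acc = np.zeros(target.shape, np.float32)
    weight = np.zeros(target.shape[:2], np.float32)
    for nb in neighbors:
        nb_gray = cv2.cvtColor(nb, cv2.COLOR_BGR2GRAY)
        warped, valid = warp_to_target(nb_gray, tgt_gray, nb)
        acc += warped.astype(np.float32) * valid[..., None]
        weight += valid
    weight = np.maximum(weight, 1e-6)  # avoid division by zero
    return (acc / weight[..., None]).astype(np.uint8)

In this simplified form, pixels cropped out of the target frame are recovered from neighbors where the flow maps them inside the frame; the paper's learned fusion additionally suppresses artifacts from flow errors and fast-moving objects, which a plain average cannot.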

Cite

Text

Liu et al. "Hybrid Neural Fusion for Full-Frame Video Stabilization." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00230

Markdown

[Liu et al. "Hybrid Neural Fusion for Full-Frame Video Stabilization." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/liu2021iccv-hybrid/) doi:10.1109/ICCV48922.2021.00230

BibTeX

@inproceedings{liu2021iccv-hybrid,
  title     = {{Hybrid Neural Fusion for Full-Frame Video Stabilization}},
  author    = {Liu, Yu-Lun and Lai, Wei-Sheng and Yang, Ming-Hsuan and Chuang, Yung-Yu and Huang, Jia-Bin},
  booktitle = {International Conference on Computer Vision},
  year      = {2021},
  pages     = {2299--2308},
  doi       = {10.1109/ICCV48922.2021.00230},
  url       = {https://mlanthology.org/iccv/2021/liu2021iccv-hybrid/}
}