Generative Inbetweening: Adapting Image-to-Video Models for Keyframe Interpolation

Abstract

We present a method for generating video sequences with coherent motion between a pair of input keyframes. We adapt a pretrained large-scale image-to-video diffusion model (originally trained to generate videos moving forward in time from a single input image) for keyframe interpolation, i.e., to produce a video between two input frames. We accomplish this adaptation through a lightweight fine-tuning technique that produces a version of the model that instead predicts videos moving backwards in time from a single input image. This model (along with the original forward-moving model) is subsequently used in a dual-directional diffusion sampling process that combines the overlapping model estimates starting from each of the two keyframes. Our experiments show that our method outperforms both existing diffusion-based methods and traditional frame interpolation techniques.
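
The sketch below illustrates the dual-directional sampling idea described in the abstract: at each denoising step, a forward-in-time estimate anchored at the first keyframe is fused with a time-reversed estimate from the fine-tuned backward-moving model anchored at the second keyframe. The model wrappers, the averaging rule, and the Euler-style update are hypothetical stand-ins chosen for illustration; they are not the authors' implementation or any specific video diffusion API.

```python
# Hedged sketch of dual-directional diffusion sampling between two keyframes.
# forward_model / backward_model are placeholder denoisers, not a real library API.
import torch

def forward_model(x_t, t, keyframe_a):
    # Stand-in: pretrained denoiser that generates video moving forward
    # in time from keyframe_a.
    return torch.zeros_like(x_t)

def backward_model(x_t, t, keyframe_b):
    # Stand-in: lightly fine-tuned denoiser that generates video moving
    # backward in time from keyframe_b.
    return torch.zeros_like(x_t)

def dual_directional_sample(keyframe_a, keyframe_b, num_frames=25, steps=30):
    # Shared noisy video latent: frame 0 corresponds to keyframe_a's end,
    # frame -1 to keyframe_b's end.
    x = torch.randn(num_frames, *keyframe_a.shape)
    sigmas = torch.linspace(1.0, 0.0, steps + 1)  # toy noise schedule

    for i in range(steps):
        t = sigmas[i]
        # Forward-in-time estimate, anchored at the first keyframe.
        est_fwd = forward_model(x, t, keyframe_a)
        # Backward-in-time estimate, anchored at the second keyframe;
        # flip along the time axis so both estimates share one frame ordering.
        est_bwd = backward_model(torch.flip(x, dims=[0]), t, keyframe_b)
        est_bwd = torch.flip(est_bwd, dims=[0])
        # Fuse the two overlapping estimates (simple average here).
        est = 0.5 * (est_fwd + est_bwd)
        # Illustrative Euler-style denoising update.
        x = x + (sigmas[i + 1] - sigmas[i]) * est

    return x
```

A simple average is used here as the fusion rule for clarity; any scheme that reconciles the two overlapping per-step estimates would fit the same loop structure.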

Cite

Text

Wang et al. "Generative Inbetweening: Adapting Image-to-Video Models for Keyframe Interpolation." International Conference on Learning Representations, 2025.

Markdown

[Wang et al. "Generative Inbetweening: Adapting Image-to-Video Models for Keyframe Interpolation." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/wang2025iclr-generative/)

BibTeX

@inproceedings{wang2025iclr-generative,
  title     = {{Generative Inbetweening: Adapting Image-to-Video Models for Keyframe Interpolation}},
  author    = {Wang, Xiaojuan and Zhou, Boyang and Curless, Brian and Kemelmacher-Shlizerman, Ira and Holynski, Aleksander and Seitz, Steve},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/wang2025iclr-generative/}
}