LDMVFI: Video Frame Interpolation with Latent Diffusion Models
Abstract
Existing works on video frame interpolation (VFI) mostly employ deep neural networks that are trained by minimizing the L1, L2, or deep feature space distance (e.g. VGG loss) between their outputs and ground-truth frames. However, recent works have shown that these metrics are poor indicators of perceptual VFI quality. Towards developing perceptually-oriented VFI methods, in this work we propose latent diffusion model-based VFI, LDMVFI. This approaches the VFI problem from a generative perspective by formulating it as a conditional generation problem. As the first effort to address VFI using latent diffusion models, we rigorously benchmark our method on common test sets used in the existing VFI literature. Our quantitative experiments and user study indicate that LDMVFI is able to interpolate video content with favorable perceptual quality compared to the state of the art, even in the high-resolution regime. Our code is available at https://github.com/danier97/LDMVFI.
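The abstract frames VFI as conditional generation in a latent space: the two neighbouring frames condition a diffusion model that samples the latent of the intermediate frame, which is then decoded back to pixels. The sketch below illustrates that general idea only; it is not the LDMVFI implementation (see the GitHub link above for that). All module names (TinyAutoencoder, TinyDenoiser), architectures, and hyperparameters here are illustrative placeholders.

```python
# Hypothetical toy sketch of latent-diffusion-based frame interpolation.
# NOT the LDMVFI architecture; modules and hyperparameters are placeholders.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Maps frames to a compact latent space and back (stand-in for an autoencoder)."""
    def __init__(self, channels=3, latent=8):
        super().__init__()
        self.enc = nn.Conv2d(channels, latent, 4, stride=4)
        self.dec = nn.ConvTranspose2d(latent, channels, 4, stride=4)

    def encode(self, x):
        return self.enc(x)

    def decode(self, z):
        return self.dec(z)

class TinyDenoiser(nn.Module):
    """Predicts noise in the middle-frame latent, conditioned on the latents
    of the two neighbouring frames (stand-in for a denoising U-Net)."""
    def __init__(self, latent=8):
        super().__init__()
        self.net = nn.Conv2d(latent * 3, latent, 3, padding=1)

    def forward(self, z_t, z_prev, z_next, t):
        # The timestep t is ignored here; a real model would embed it.
        return self.net(torch.cat([z_t, z_prev, z_next], dim=1))

@torch.no_grad()
def interpolate(frame0, frame1, ae, denoiser, steps=10):
    """DDPM-style ancestral sampling in latent space, then decode to pixels."""
    z0, z1 = ae.encode(frame0), ae.encode(frame1)
    z = torch.randn_like(z0)                      # start from pure noise
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    for i in reversed(range(steps)):
        eps = denoiser(z, z0, z1, i)              # predict noise given neighbours
        z = (z - betas[i] / torch.sqrt(1 - alpha_bars[i]) * eps) / torch.sqrt(alphas[i])
        if i > 0:
            z = z + torch.sqrt(betas[i]) * torch.randn_like(z)
    return ae.decode(z)                           # reconstruct the middle frame

if __name__ == "__main__":
    ae, denoiser = TinyAutoencoder(), TinyDenoiser()
    f0, f1 = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    mid = interpolate(f0, f1, ae, denoiser)
    print(mid.shape)  # torch.Size([1, 3, 64, 64])
```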
Cite
Text
Danier et al. "LDMVFI: Video Frame Interpolation with Latent Diffusion Models." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I2.27912
Markdown
[Danier et al. "LDMVFI: Video Frame Interpolation with Latent Diffusion Models." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/danier2024aaai-ldmvfi/) doi:10.1609/AAAI.V38I2.27912
BibTeX
@inproceedings{danier2024aaai-ldmvfi,
title = {{LDMVFI: Video Frame Interpolation with Latent Diffusion Models}},
author = {Danier, Duolikun and Zhang, Fan and Bull, David},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2024},
pages = {1472--1480},
doi = {10.1609/AAAI.V38I2.27912},
url = {https://mlanthology.org/aaai/2024/danier2024aaai-ldmvfi/}
}