Videoshop: Localized Semantic Video Editing with Noise-Extrapolated Diffusion Inversion
Abstract
We introduce Videoshop, a training-free video editing algorithm for localized semantic edits. Videoshop allows users to use any editing software, including Photoshop and generative inpainting, to modify the first frame; it automatically propagates those changes, with semantically, spatially, and temporally consistent motion, to the remaining frames. Unlike existing methods that enable edits only through imprecise textual instructions, Videoshop allows users to add or remove objects, semantically change objects, insert stock photos into videos, etc., with fine-grained control over locations and appearance. We achieve this through image-based video editing: we invert latents with noise extrapolation, then generate videos conditioned on the edited image. Videoshop produces higher-quality edits than 6 baselines on 2 editing benchmarks using 10 evaluation metrics.
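To make the abstract's "inverting latents with noise extrapolation" concrete, here is a minimal conceptual sketch of a DDIM-style inversion loop in which the denoiser is queried only intermittently and its noise prediction is linearly extrapolated in between. Everything below (the stub denoiser, the schedule alpha_bar, the query_every parameter, and the single-image latent shape) is a hypothetical illustration under assumed conventions, not the authors' actual implementation, which operates on video latents.

import torch

def invert_with_noise_extrapolation(z0, eps_model, alpha_bar, query_every=2):
    """Map a clean latent z0 back toward noise (hypothetical sketch).

    The denoiser is called only every `query_every` steps; between calls,
    the predicted noise is linearly extrapolated from the two most
    recent predictions.
    """
    z, history = z0, []
    for t in range(len(alpha_bar) - 1):
        a_t, a_next = alpha_bar[t], alpha_bar[t + 1]
        if t % query_every == 0 or len(history) < 2:
            eps = eps_model(z, t)                # real denoiser query
        else:
            eps = 2 * history[-1] - history[-2]  # linear extrapolation
        history.append(eps)
        # deterministic DDIM-style inversion update toward higher noise
        x0_hat = (z - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        z = a_next.sqrt() * x0_hat + (1 - a_next).sqrt() * eps
    return z

# Toy usage with a stub denoiser on a random latent (illustrative only).
alpha_bar = torch.linspace(0.999, 0.01, 50)      # decreasing noise schedule
stub_denoiser = lambda z, t: 0.1 * z             # placeholder for a network
z_T = invert_with_noise_extrapolation(torch.randn(1, 4, 8, 8), stub_denoiser, alpha_bar)

The inverted latent z_T would then seed generation conditioned on the edited first frame, so the edit propagates to the remaining frames.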
Cite
Text
Fan et al. "Videoshop: Localized Semantic Video Editing with Noise-Extrapolated Diffusion Inversion." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73254-6_14
Markdown
[Fan et al. "Videoshop: Localized Semantic Video Editing with Noise-Extrapolated Diffusion Inversion." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/fan2024eccv-videoshop/) doi:10.1007/978-3-031-73254-6_14
BibTeX
@inproceedings{fan2024eccv-videoshop,
title = {{Videoshop: Localized Semantic Video Editing with Noise-Extrapolated Diffusion Inversion}},
author = {Fan, Xiang and Bhattad, Anand and Krishna, Ranjay},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-73254-6_14},
url = {https://mlanthology.org/eccv/2024/fan2024eccv-videoshop/}
}