Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators

Abstract

Diffusion models are capable of generating impressive images conditioned on text descriptions, and extensions of these models allow users to edit images at a relatively coarse scale. However, precisely editing the layout, position, pose, and shape of objects in images with diffusion models remains difficult. To address this, we propose _motion guidance_, a zero-shot technique that allows a user to specify dense, complex motion fields indicating where each pixel in an image should move. Motion guidance works by steering the diffusion sampling process with gradients obtained through an off-the-shelf optical flow network. Specifically, we design a guidance loss that encourages the sample to have the desired motion, as estimated by the flow network, while also remaining visually similar to the source image. By simultaneously sampling from a diffusion model and guiding the sample toward low guidance loss, we obtain a motion-edited image. We demonstrate that our technique works on complex motions and produces high-quality edits of real and generated images.
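
The guidance procedure described in the abstract can be sketched in a few lines of PyTorch. The sketch below is illustrative rather than the authors' released implementation: denoiser stands in for any epsilon-prediction diffusion model, flow_net for any differentiable off-the-shelf optical flow estimator (e.g., RAFT), and the photometric term, loss weights, and DDIM-style guided update are simplifying assumptions.

import torch
import torch.nn.functional as F

def warp(img, flow):
    # Backward-warp img by a (B, 2, H, W) flow field using bilinear sampling.
    B, _, H, W = img.shape
    ys, xs = torch.meshgrid(torch.arange(H, device=img.device),
                            torch.arange(W, device=img.device), indexing="ij")
    coords = torch.stack((xs, ys)).float().unsqueeze(0) + flow
    grid = torch.stack((2.0 * coords[:, 0] / (W - 1) - 1.0,
                        2.0 * coords[:, 1] / (H - 1) - 1.0), dim=-1)
    return F.grid_sample(img, grid, align_corners=True)

def motion_guidance_loss(flow_net, src_img, x0_pred, target_flow, lambda_color=100.0):
    # Flow term: the motion estimated between the source image and the
    # current sample should match the user-specified target flow.
    flow_loss = (flow_net(src_img, x0_pred) - target_flow).abs().mean()
    # Color term (simplified): pixels should keep their appearance after
    # being moved by the target flow.
    color_loss = (warp(src_img, target_flow) - x0_pred).abs().mean()
    return flow_loss + lambda_color * color_loss

def guided_ddim_step(denoiser, alphas_cumprod, x_t, t, t_prev,
                     src_img, target_flow, flow_net, guidance_scale=1.0):
    # One reverse-diffusion step steered by gradients of the guidance loss.
    x_t = x_t.detach().requires_grad_(True)
    eps = denoiser(x_t, t)                                  # predicted noise
    a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t_prev]
    x0_pred = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()   # predicted clean image
    loss = motion_guidance_loss(flow_net, src_img, x0_pred, target_flow)
    grad = torch.autograd.grad(loss, x_t)[0]
    # Classifier-guidance-style update: nudge the noise prediction along the
    # loss gradient so the sample is pulled toward the desired motion.
    eps = eps + guidance_scale * (1 - a_t).sqrt() * grad
    x0_pred = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
    return (a_prev.sqrt() * x0_pred + (1 - a_prev).sqrt() * eps).detach()

A full sampler would loop guided_ddim_step over a decreasing timestep schedule, starting for example from pure noise or a noised version of the source image; further details of the paper's loss design and sampling schedule are omitted here.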

Cite

Text

Geng and Owens. "Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators." International Conference on Learning Representations, 2024.

Markdown

[Geng and Owens. "Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/geng2024iclr-motion/)

BibTeX

@inproceedings{geng2024iclr-motion,
  title     = {{Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators}},
  author    = {Geng, Daniel and Owens, Andrew},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/geng2024iclr-motion/}
}