GoodDrag: Towards Good Practices for Drag Editing with Diffusion Models

Abstract

In this paper, we introduce GoodDrag, a novel approach to improve the stability and image quality of drag editing. Unlike existing methods that struggle with accumulated perturbations and often result in distortions, GoodDrag introduces an Alternating Drag and Denoising (AlDD) framework that alternates between drag and denoising operations within the diffusion process, effectively improving the fidelity of the result. We also propose an information-preserving motion supervision operation that maintains the original features of the starting point for precise manipulation and artifact reduction. In addition, we contribute to the benchmarking of drag editing by introducing a new dataset, Drag100, and developing dedicated quality assessment metrics, Dragging Accuracy Index and Gemini Score, utilizing Large Multimodal Models. Extensive experiments demonstrate that the proposed GoodDrag compares favorably against the state-of-the-art approaches both qualitatively and quantitatively. The source code and data are available at https://gooddrag.github.io.

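To make the alternating schedule concrete, the short Python sketch below interleaves drag (motion supervision) updates with denoising steps, rather than applying all drag operations first and denoising only at the end; each denoising pass corrects the small perturbations introduced by the preceding drag updates before they compound. The function alternating_drag_denoise and the callables drag_step and denoise_step are hypothetical placeholders for illustration under these assumptions, not the released GoodDrag code, and the step counts are arbitrary.

# Minimal illustrative sketch of an alternating drag/denoise schedule.
# `drag_step` and `denoise_step` are hypothetical callables supplied by the
# caller; they stand in for one motion-supervision update and one diffusion
# denoising step, respectively.
def alternating_drag_denoise(latent, drag_step, denoise_step,
                             num_drag_steps=70, drags_per_denoise=10):
    """Interleave drag updates with denoising so perturbations are corrected
    as they arise, instead of accumulating until a final denoising pass."""
    for step in range(num_drag_steps):
        latent = drag_step(latent)
        # Periodically run one denoising step to pull the perturbed latent
        # back toward the diffusion model's learned distribution.
        if (step + 1) % drags_per_denoise == 0:
            latent = denoise_step(latent)
    return latent

# Toy usage with stub callables; a real pipeline would wrap a diffusion model.
edited = alternating_drag_denoise(latent=0.0,
                                  drag_step=lambda z: z + 0.1,
                                  denoise_step=lambda z: z * 0.9)
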
Cite

Text

Zhang et al. "GoodDrag: Towards Good Practices for Drag Editing with Diffusion Models." International Conference on Learning Representations, 2025.

Markdown

[Zhang et al. "GoodDrag: Towards Good Practices for Drag Editing with Diffusion Models." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/zhang2025iclr-gooddrag/)

BibTeX

@inproceedings{zhang2025iclr-gooddrag,
  title     = {{GoodDrag: Towards Good Practices for Drag Editing with Diffusion Models}},
  author    = {Zhang, Zewei and Liu, Huan and Chen, Jun and Xu, Xiangyu},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/zhang2025iclr-gooddrag/}
}