MotionShot: Adaptive Motion Transfer Across Arbitrary Objects for Text-to-Video Generation
Abstract
Existing text-to-video methods struggle to transfer motion smoothly from a reference object to a target object that differs significantly from it in appearance or structure. To address this challenge, we introduce MotionShot, a training-free framework capable of parsing reference-target correspondences in a fine-grained manner, thereby achieving high-fidelity motion transfer while preserving appearance coherence. Specifically, MotionShot first performs semantic feature matching to ensure high-level alignment between the reference and target objects. It then establishes low-level morphological alignment through reference-to-target shape retargeting. By encoding motion with temporal attention, MotionShot can coherently transfer motion across objects even in the presence of significant appearance and structure disparities, as demonstrated by extensive experiments. The project page is available at: https://motionshot.github.io/.
Cite
Text
Liu et al. "MotionShot: Adaptive Motion Transfer Across Arbitrary Objects for Text-to-Video Generation." International Conference on Computer Vision, 2025.
Markdown
[Liu et al. "MotionShot: Adaptive Motion Transfer Across Arbitrary Objects for Text-to-Video Generation." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/liu2025iccv-motionshot/)
BibTeX
@inproceedings{liu2025iccv-motionshot,
title = {{MotionShot: Adaptive Motion Transfer Across Arbitrary Objects for Text-to-Video Generation}},
author = {Liu, Yanchen and Sun, Yanan and Xing, Zhening and Gao, Junyao and Chen, Kai and Pei, Wenjie},
booktitle = {International Conference on Computer Vision},
year = {2025},
pages = {11861--11871},
url = {https://mlanthology.org/iccv/2025/liu2025iccv-motionshot/}
}