Motion Modes: What Could Happen Next?

Abstract

Predicting diverse object motions from a single static image remains challenging, as current video generation models often entangle object movement with camera motion and other scene changes. While recent methods can predict specific motions from motion arrow input, they rely on synthetic data and predefined motions, limiting their application to complex scenes. We introduce Motion Modes, a training-free approach that explores a pre-trained image-to-video generator's latent distribution to discover distinct, plausible motions focused on selected objects in static images. We achieve this by employing a flow generator guided by energy functions designed to disentangle object and camera motion. Additionally, we use an energy inspired by particle guidance to diversify the generated motions, without requiring explicit training data. Experimental results demonstrate that Motion Modes generates realistic and varied object animations, surpassing previous methods and even human predictions in both plausibility and diversity.
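The diversity energy described in the abstract can be illustrated with a toy sketch. The idea behind particle-guidance-style repulsion is to treat a batch of candidate flow fields as interacting particles and descend an energy that penalizes pairwise similarity, so the samples spread out over distinct motions. The code below is a hypothetical NumPy illustration (the function names, the RBF kernel choice, and the plain gradient-descent loop are assumptions, not the paper's implementation, which applies such guidance inside a diffusion sampler):

```python
import numpy as np

def pairwise_repulsion_grad(samples, sigma=1.0):
    """Gradient of an RBF similarity energy over a batch of flow samples.

    E = sum_{i != j} exp(-||x_i - x_j||^2 / (2 sigma^2)); descending E
    pushes the samples apart, encouraging diverse motions.
    (Illustrative sketch inspired by particle guidance, not the paper's code.)
    """
    n = samples.shape[0]
    flat = samples.reshape(n, -1)
    diffs = flat[:, None, :] - flat[None, :, :]            # (n, n, d): x_i - x_j
    d2 = (diffs ** 2).sum(-1)                              # squared pairwise distances
    k = np.exp(-d2 / (2 * sigma ** 2))                     # RBF similarity kernel
    np.fill_diagonal(k, 0.0)                               # ignore self-pairs
    # dE/dx_i = -(2 / sigma^2) * sum_j k_ij (x_i - x_j)  (each pair counted twice)
    grad = -(2.0 / sigma ** 2) * (k[:, :, None] * diffs).sum(axis=1)
    return grad.reshape(samples.shape)

def diversify(samples, steps=100, lr=0.05, sigma=1.0):
    """Take a few gradient-descent steps on the repulsion energy."""
    x = samples.copy()
    for _ in range(steps):
        x -= lr * pairwise_repulsion_grad(x, sigma)
    return x
```

In the full method this gradient would be one guidance term added at each denoising step, alongside the energies that disentangle object and camera motion; here it is run standalone only to show that the update increases pairwise separation between samples.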

Cite

Text

Pandey et al. "Motion Modes: What Could Happen Next?" Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.00195

Markdown

[Pandey et al. "Motion Modes: What Could Happen Next?" Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/pandey2025cvpr-motion/) doi:10.1109/CVPR52734.2025.00195

BibTeX

@inproceedings{pandey2025cvpr-motion,
  title     = {{Motion Modes: What Could Happen Next?}},
  author    = {Pandey, Karran and Hold-Geoffroy, Yannick and Gadelha, Matheus and Mitra, Niloy J. and Singh, Karan and Guerrero, Paul},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {2030--2039},
  doi       = {10.1109/CVPR52734.2025.00195},
  url       = {https://mlanthology.org/cvpr/2025/pandey2025cvpr-motion/}
}