VideoGrain: Modulating Space-Time Attention for Multi-Grained Video Editing

Abstract

Recent advancements in diffusion models have significantly improved video generation and editing capabilities. However, multi-grained video editing, which encompasses class-level, instance-level, and part-level modifications, remains a formidable challenge. The major difficulties in multi-grained editing are semantic misalignment in text-to-region control and feature coupling within the diffusion model. To address these difficulties, we present VideoGrain, a zero-shot approach that modulates space-time (cross- and self-) attention mechanisms to achieve fine-grained control over video content. In cross-attention, we enhance text-to-region control by amplifying each local prompt's attention to its corresponding spatially disentangled region while minimizing interactions with irrelevant areas. In self-attention, we improve feature separation by increasing intra-region awareness and reducing inter-region interference. Extensive experiments demonstrate that our method achieves state-of-the-art performance in real-world scenarios. Our code, data, and demos are available on the [project page](https://knightyxp.github.io/VideoGrain_project_page/).
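To make the attention-modulation idea in the abstract concrete, the sketch below biases attention logits with precomputed binary region masks before the softmax: cross-attention logits are raised for pixels inside a local prompt's target region and lowered elsewhere, while self-attention logits are raised between pixels of the same region and lowered across regions. This is a minimal illustration under assumed tensor shapes; the function names, mask format, and bias magnitudes are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of mask-based space-time attention modulation.
# Assumes precomputed binary region masks and a token-to-region mapping;
# all names, shapes, and bias values below are illustrative.
import torch


def modulate_cross_attention(scores, region_masks, token_to_region,
                             pos_bias=5.0, neg_bias=-5.0):
    """Bias text-to-region cross-attention logits.

    scores:          (B*heads, N_pixels, N_tokens) attention logits
    region_masks:    (N_regions, N_pixels) binary masks, 1 where a pixel
                     belongs to a region
    token_to_region: (N_tokens,) region index per text token, -1 for
                     tokens left unmodulated (e.g. padding)
    """
    bias = torch.zeros_like(scores)
    for tok, reg in enumerate(token_to_region.tolist()):
        if reg < 0:
            continue
        inside = region_masks[reg].bool()     # pixels of this token's region
        bias[:, inside, tok] += pos_bias      # amplify attention inside the region
        bias[:, ~inside, tok] += neg_bias     # suppress attention to irrelevant areas
    return scores + bias


def modulate_self_attention(scores, region_masks, pos_bias=5.0, neg_bias=-5.0):
    """Bias pixel-to-pixel self-attention logits.

    scores:       (B*heads, N_pixels, N_pixels) attention logits
    region_masks: (N_regions, N_pixels) binary masks
    """
    # same_region[i, j] = 1 if pixels i and j belong to the same region
    same_region = (region_masks.T.float() @ region_masks.float()).clamp(max=1.0)
    bias = torch.where(same_region.bool(),
                       torch.full_like(same_region, pos_bias),   # intra-region awareness
                       torch.full_like(same_region, neg_bias))   # inter-region suppression
    return (scores + bias.unsqueeze(0))
```

In this sketch the modulated logits would simply replace the raw logits before the usual softmax-and-value step of each attention layer; the positive/negative bias values control how strongly each local prompt is confined to its region.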

Cite

Text

Yang et al. "VideoGrain: Modulating Space-Time Attention for Multi-Grained Video Editing." International Conference on Learning Representations, 2025.

Markdown

[Yang et al. "VideoGrain: Modulating Space-Time Attention for Multi-Grained Video Editing." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/yang2025iclr-videograin/)

BibTeX

@inproceedings{yang2025iclr-videograin,
  title     = {{VideoGrain: Modulating Space-Time Attention for Multi-Grained Video Editing}},
  author    = {Yang, Xiangpeng and Zhu, Linchao and Fan, Hehe and Yang, Yi},
  booktitle = {International Conference on Learning Representations},
  year      = {2025},
  url       = {https://mlanthology.org/iclr/2025/yang2025iclr-videograin/}
}