Fine-Grained Controllable Video Generation via Object Appearance and Context
Abstract
While text-to-video generation shows state-of-the-art results, fine-grained output control remains challenging for users relying solely on natural language prompts. In this work, we present FACTOR for fine-grained controllable video generation. FACTOR provides an intuitive interface where users can manipulate the trajectory and appearance of individual objects in conjunction with a text prompt. We propose a unified framework to integrate these control signals into an existing text-to-video model. Our approach involves a multimodal condition module with a joint encoder, control-attention layers, and an appearance augmentation mechanism. This design enables FACTOR to generate videos that closely align with detailed user specifications. Extensive experiments on standard benchmarks and user-provided inputs demonstrate a notable improvement in controllability by FACTOR over competitive baselines.
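The abstract describes the condition module only at a high level. As a loose illustration, here is a minimal NumPy sketch of how a joint encoder might combine per-object appearance and trajectory tokens, and how a control-attention layer could inject them into intermediate video features. All names, shapes, and the residual injection are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def joint_encode(appearance_tokens, trajectory_tokens):
    # Hypothetical joint encoder: concatenate per-object appearance
    # and trajectory embeddings into one sequence of control tokens.
    return np.concatenate([appearance_tokens, trajectory_tokens], axis=0)

def control_attention(frame_features, control_tokens):
    # Cross-attention: video frame features (queries) attend to the
    # control tokens (keys/values), pulling in per-object conditioning.
    d_k = frame_features.shape[-1]
    scores = frame_features @ control_tokens.T / np.sqrt(d_k)
    return softmax(scores, axis=-1) @ control_tokens

# Toy dimensions (assumed): 2 objects, 64-dim embeddings, 16 frame tokens.
rng = np.random.default_rng(0)
appearance = rng.standard_normal((2, 64))  # one appearance embedding per object
trajectory = rng.standard_normal((2, 64))  # one trajectory embedding per object
frames = rng.standard_normal((16, 64))     # intermediate video features

ctrl = joint_encode(appearance, trajectory)     # (4, 64) control tokens
out = frames + control_attention(frames, ctrl)  # residual control injection
print(out.shape)  # (16, 64)
```

In practice such a layer would be inserted alongside the text cross-attention of the base text-to-video model and trained while the backbone stays largely frozen; this sketch only shows the tensor flow.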
Cite
Text
Huang et al. "Fine-Grained Controllable Video Generation via Object Appearance and Context." Winter Conference on Applications of Computer Vision, 2025.
Markdown
[Huang et al. "Fine-Grained Controllable Video Generation via Object Appearance and Context." Winter Conference on Applications of Computer Vision, 2025.](https://mlanthology.org/wacv/2025/huang2025wacv-finegrained/)
BibTeX
@inproceedings{huang2025wacv-finegrained,
title = {{Fine-Grained Controllable Video Generation via Object Appearance and Context}},
author = {Huang, Hsin-Ping and Su, Yu-Chuan and Sun, Deqing and Jiang, Lu and Jia, Xuhui and Zhu, Yukun and Yang, Ming-Hsuan},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2025},
pages = {3698--3708},
url = {https://mlanthology.org/wacv/2025/huang2025wacv-finegrained/}
}