OmniVCus: Feedforward Subject-Driven Video Customization with Multimodal Control Conditions
Abstract
Existing feedforward subject-driven video customization methods mainly study single-subject scenarios due to the difficulty of constructing multi-subject training data pairs. Another challenging and still underexplored problem is how to use signals such as depth, mask, camera, and text prompts to control and edit the subject in the customized video. In this paper, we first propose a data construction pipeline, VideoCus-Factory, to produce training data pairs for multi-subject customization from raw, unlabeled videos, as well as control-signal pairs such as depth-to-video and mask-to-video. Based on our constructed data, we develop an Image-Video Transfer Mixed (IVTM) training scheme with image editing data to enable instructive editing of the subject in the customized video. We then propose a diffusion Transformer framework, OmniVCus, with two embedding mechanisms: Lottery Embedding (LE) and Temporally Aligned Embedding (TAE). LE enables inference with more subjects than seen in training by using the training subjects to activate more frame embeddings. TAE encourages the generation process to extract guidance from temporally aligned control signals by assigning the same frame embeddings to the control and noise tokens. Experiments demonstrate that our method significantly surpasses state-of-the-art methods in both quantitative and qualitative evaluations. Project page is at https://caiyuanhao1998.github.io/project/OmniVCus/
Cite
Text
Cai et al. "OmniVCus: Feedforward Subject-Driven Video Customization with Multimodal Control Conditions." Advances in Neural Information Processing Systems, 2025.
Markdown
[Cai et al. "OmniVCus: Feedforward Subject-Driven Video Customization with Multimodal Control Conditions." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/cai2025neurips-omnivcus/)
BibTeX
@inproceedings{cai2025neurips-omnivcus,
title = {{OmniVCus: Feedforward Subject-Driven Video Customization with Multimodal Control Conditions}},
author = {Cai, Yuanhao and Zhang, He and Chen, Xi and Xing, Jinbo and Hu, Yiwei and Zhou, Yuqian and Zhang, Kai and Zhang, Zhifei and Kim, Soo Ye and Wang, Tianyu and Zhang, Yulun and Yang, Xiaokang and Lin, Zhe and Yuille, Alan},
booktitle = {Advances in Neural Information Processing Systems},
year = {2025},
url = {https://mlanthology.org/neurips/2025/cai2025neurips-omnivcus/}
}