OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation

Abstract

Subject-to-Video (S2V) generation aims to create videos that faithfully incorporate reference content, providing greater flexibility in video production. To establish the infrastructure for S2V generation, we propose **OpenS2V-Nexus**, consisting of (i) **OpenS2V-Eval**, a fine-grained benchmark, and (ii) **OpenS2V-5M**, a million-scale dataset. In contrast to existing S2V benchmarks inherited from VBench, which focus on global and coarse-grained assessment of generated videos, *OpenS2V-Eval* focuses on a model's ability to generate subject-consistent videos with natural subject appearance and identity fidelity. To this end, *OpenS2V-Eval* introduces 180 prompts spanning seven major S2V categories, incorporating both real and synthetic test data. Furthermore, to better align S2V benchmarks with human preferences, we propose three automatic metrics, NexusScore, NaturalScore, and GmeScore, to separately quantify subject consistency, naturalness, and text relevance in generated videos. Building on this, we conduct a comprehensive evaluation of 18 representative S2V models, highlighting their strengths and weaknesses across different content types. Moreover, we create the first open-source large-scale S2V generation dataset, *OpenS2V-5M*, which consists of five million high-quality 720P subject-text-video triplets. Specifically, we ensure subject-information diversity in our dataset by (1) segmenting subjects and building pairing information via cross-video associations and (2) prompting GPT-4o on raw frames to synthesize multi-view representations. Through *OpenS2V-Nexus*, we deliver a robust infrastructure to accelerate future S2V generation research.

Cite

Text

Yuan et al. "OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation." Advances in Neural Information Processing Systems, 2025.

Markdown

[Yuan et al. "OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/yuan2025neurips-opens2vnexus/)

BibTeX

@inproceedings{yuan2025neurips-opens2vnexus,
  title     = {{OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation}},
  author    = {Yuan, Shenghai and He, Xianyi and Deng, Yufan and Ye, Yang and Huang, Jinfa and Lin, Bin and Ma, Chongyang and Luo, Jiebo and Yuan, Li},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/yuan2025neurips-opens2vnexus/}
}