PhyT2V: LLM-Guided Iterative Self-Refinement for Physics-Grounded Text-to-Video Generation
Abstract
Text-to-video (T2V) generation has recently been enabled by transformer-based diffusion models, but current T2V models still struggle to adhere to real-world common knowledge and physical rules, due to their limited understanding of physical realism and deficiencies in temporal modeling. Existing solutions are either data-driven or require extra model inputs, and they do not generalize to out-of-distribution domains. In this paper, we present PhyT2V, a new data-independent T2V technique that expands a current T2V model's video generation capability to out-of-distribution domains by enabling chain-of-thought and step-back reasoning in T2V prompting. Our experiments show that PhyT2V improves existing T2V models' adherence to real-world physical rules by 2.3x, and achieves a 35% improvement over T2V prompt enhancers.
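The abstract describes an LLM-guided loop that refines the text prompt rather than the T2V model itself. The sketch below illustrates one plausible shape of that loop; the callables `t2v_generate`, `video_caption`, and `llm` are assumed placeholders for a frozen T2V model, a video captioner, and an LLM client, and are not the authors' released API.

```python
from typing import Callable

def phyt2v_refine(
    prompt: str,
    t2v_generate: Callable[[str], object],   # prompt -> video (T2V model stays frozen)
    video_caption: Callable[[object], str],  # video -> textual description
    llm: Callable[[str], str],               # free-form LLM query
    rounds: int = 3,
) -> str:
    """Return a physics-grounded prompt after several refinement rounds.

    A minimal sketch of LLM-guided iterative self-refinement, assuming the
    loop structure implied by the abstract (not the paper's exact prompts).
    """
    for _ in range(rounds):
        video = t2v_generate(prompt)
        caption = video_caption(video)

        # Step-back reasoning: recover the general physical rules the
        # described scene should obey (gravity, momentum, rigidity, ...).
        rules = llm(f"What physical rules should govern this scene? {prompt}")

        # Chain-of-thought reasoning: compare the intended prompt against a
        # caption of what was actually generated and identify rule violations.
        mismatch = llm(
            "Think step by step: given the rules "
            f"[{rules}], how does the generated video [{caption}] "
            f"deviate from the intended prompt [{prompt}]?"
        )

        # Fold the rules and the detected mismatch back into the prompt.
        prompt = llm(
            f"Rewrite the prompt [{prompt}] so it explicitly enforces "
            f"[{rules}] and corrects [{mismatch}]. Return only the new prompt."
        )
    return prompt
```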
Cite
Text
Xue et al. "PhyT2V: LLM-Guided Iterative Self-Refinement for Physics-Grounded Text-to-Video Generation." Conference on Computer Vision and Pattern Recognition, 2025. doi:10.1109/CVPR52734.2025.01754
Markdown
[Xue et al. "PhyT2V: LLM-Guided Iterative Self-Refinement for Physics-Grounded Text-to-Video Generation." Conference on Computer Vision and Pattern Recognition, 2025.](https://mlanthology.org/cvpr/2025/xue2025cvpr-phyt2v/) doi:10.1109/CVPR52734.2025.01754
BibTeX
@inproceedings{xue2025cvpr-phyt2v,
  title     = {{PhyT2V: LLM-Guided Iterative Self-Refinement for Physics-Grounded Text-to-Video Generation}},
  author    = {Xue, Qiyao and Yin, Xiangyu and Yang, Boyuan and Gao, Wei},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2025},
  pages     = {18826--18836},
  doi       = {10.1109/CVPR52734.2025.01754},
  url       = {https://mlanthology.org/cvpr/2025/xue2025cvpr-phyt2v/}
}