V2M4: 4D Mesh Animation Reconstruction from a Single Monocular Video

Abstract

We present V2M4, a novel 4D reconstruction method that directly generates a usable 4D mesh animation asset from a single monocular video. Unlike existing approaches that rely on priors from multi-view image and video generation models, our method is based on native 3D mesh generation models. Naively applying 3D mesh generation models to generate a mesh for each frame in a 4D task can lead to issues such as incorrect mesh poses, misalignment of mesh appearance, and inconsistencies in mesh geometry and texture maps. To address these problems, we propose a structured workflow that includes camera search and mesh reposing, condition embedding optimization for mesh appearance refinement, pairwise mesh registration for topology consistency, and global texture map optimization for texture consistency. Our method outputs high-quality 4D animated assets that are compatible with mainstream graphics and game software. Experimental results across a variety of animation types and motion amplitudes demonstrate the generalization and effectiveness of our method.
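The abstract describes a structured, staged workflow: per-frame mesh generation, camera search and reposing, appearance refinement via condition-embedding optimization, pairwise registration for topology consistency, and a final global texture pass. The sketch below illustrates only the data flow between those stages; every function name is a hypothetical placeholder, not the authors' actual API, and each stage is stubbed rather than implemented.

```python
from dataclasses import dataclass

@dataclass
class Mesh:
    # Minimal stand-in for a textured mesh; the real method operates on
    # vertices, faces, and a texture map (all omitted here).
    frame: int
    pose_aligned: bool = False
    appearance_refined: bool = False
    registered_to_prev: bool = False

def generate_mesh(frame_idx):
    # Hypothetical stub: run a native 3D mesh generation model on one video frame.
    return Mesh(frame=frame_idx)

def align_pose(mesh):
    # Hypothetical stub: camera search + mesh reposing so the generated mesh
    # matches the object pose observed in the corresponding frame.
    mesh.pose_aligned = True
    return mesh

def refine_appearance(mesh):
    # Hypothetical stub: condition-embedding optimization that pulls the
    # generated appearance toward the video frame.
    mesh.appearance_refined = True
    return mesh

def register_pair(prev_mesh, mesh):
    # Hypothetical stub: pairwise mesh registration so consecutive meshes
    # share one topology (same vertex/face structure).
    mesh.registered_to_prev = prev_mesh is not None
    return mesh

def optimize_global_texture(meshes):
    # Hypothetical stub: fit a single texture map consistently across frames.
    return {"shared_texture": True, "num_frames": len(meshes)}

def v2m4_pipeline(num_frames):
    # Per-frame stages run sequentially; texture optimization runs globally.
    meshes, prev = [], None
    for i in range(num_frames):
        m = register_pair(prev, refine_appearance(align_pose(generate_mesh(i))))
        meshes.append(m)
        prev = m
    return meshes, optimize_global_texture(meshes)
```

The point of the skeleton is the ordering: pose and appearance are fixed per frame, registration ties each frame to its predecessor, and only the texture step sees all frames at once.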

Cite

Text

Chen et al. "V2M4: 4D Mesh Animation Reconstruction from a Single Monocular Video." International Conference on Computer Vision, 2025.

Markdown

[Chen et al. "V2M4: 4D Mesh Animation Reconstruction from a Single Monocular Video." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/chen2025iccv-v2m4/)

BibTeX

@inproceedings{chen2025iccv-v2m4,
  title     = {{V2M4: 4D Mesh Animation Reconstruction from a Single Monocular Video}},
  author    = {Chen, Jianqi and Zhang, Biao and Tang, Xiangjun and Wonka, Peter},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {11643--11653},
  url       = {https://mlanthology.org/iccv/2025/chen2025iccv-v2m4/}
}