DynoSurf: Neural Deformation-Based Temporally Consistent Dynamic Surface Reconstruction

Abstract

This paper explores the problem of reconstructing temporally consistent surfaces from a 3D point cloud sequence without inter-frame correspondence. To address this challenging task, we propose DynoSurf, an unsupervised learning framework that integrates a template surface representation with a learnable deformation field. Specifically, we design a coarse-to-fine strategy for learning the template surface based on the deformable tetrahedron representation. Furthermore, we propose a deformation representation based on learnable control points and blending weights, which can deform the template surface non-rigidly while preserving the consistency of local shape. Experimental results demonstrate that DynoSurf significantly outperforms current state-of-the-art approaches, showcasing its potential as a powerful tool for dynamic mesh reconstruction. The code is publicly available at https://github.com/yaoyx689/DynoSurf.
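The control-point-based deformation described in the abstract can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration (not the authors' implementation): it blends per-control-point rigid transforms with distance-based weights, so that each template vertex moves as a weighted combination of the motions of nearby control points, which is what keeps local shape consistent under non-rigid deformation. The Gaussian weighting with a `sigma` bandwidth is a hypothetical choice for illustration.

```python
import numpy as np

def blend_weights(verts, ctrl_pts, sigma=0.1):
    # Hypothetical distance-based blending: w_ij ∝ exp(-||v_i - c_j||^2 / (2 sigma^2)),
    # normalized so that each vertex's weights sum to 1.
    d2 = ((verts[:, None, :] - ctrl_pts[None, :, :]) ** 2).sum(-1)  # (N, K)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)

def deform(verts, ctrl_pts, rotations, translations, weights):
    # Blend per-control-point rigid transforms applied about each control point:
    #   v_i' = sum_j w_ij * (R_j (v_i - c_j) + c_j + t_j)
    local = verts[:, None, :] - ctrl_pts[None, :, :]          # (N, K, 3) offsets
    moved = np.einsum('kab,nkb->nka', rotations, local)       # rotate about c_j
    moved = moved + ctrl_pts[None, :, :] + translations[None, :, :]
    return (weights[:, :, None] * moved).sum(axis=1)          # weighted blend
```

With identity rotations and zero translations the deformation is the identity map, and uniform translations move the whole template rigidly; in the actual framework, the control-point transforms and blending weights would be learned per frame.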

Cite

Text

Yao et al. "DynoSurf: Neural Deformation-Based Temporally Consistent Dynamic Surface Reconstruction." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73414-4_16

Markdown

[Yao et al. "DynoSurf: Neural Deformation-Based Temporally Consistent Dynamic Surface Reconstruction." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/yao2024eccv-dynosurf/) doi:10.1007/978-3-031-73414-4_16

BibTeX

@inproceedings{yao2024eccv-dynosurf,
  title     = {{DynoSurf: Neural Deformation-Based Temporally Consistent Dynamic Surface Reconstruction}},
  author    = {Yao, Yuxin and Ren, Siyu and Hou, Junhui and Deng, Zhi and Zhang, Juyong and Wang, Wenping},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-73414-4_16},
  url       = {https://mlanthology.org/eccv/2024/yao2024eccv-dynosurf/}
}