Image-Guided Shape-from-Template Using Mesh Inextensibility Constraints

Abstract

Shape-from-Template (SfT) refers to the class of methods that reconstruct the 3D shape of a deforming object from images/videos using a 3D template. Traditional SfT methods require point correspondences between the images and the texture of the 3D template in order to reconstruct 3D shapes from images/videos in real time. Their performance degrades severely under heavy occlusions in the images because correspondences become unavailable. In contrast, modern SfT methods take a correspondence-free approach, incorporating deep neural networks to reconstruct 3D objects, and thus require huge amounts of data for supervision. Recent advances use a fully unsupervised or self-supervised approach, combining differentiable physics and graphics to deform a 3D template to match the input images. In this paper, we propose an unsupervised SfT method that uses only image observations — color features, gradients and silhouettes — together with a mesh inextensibility constraint, reconstructing 400x faster than the (best-performing) unsupervised SfT. Moreover, when it comes to recovering finer details and handling severe occlusions, our method outperforms existing methods by a large margin. Code is available at https://github.com/dvttran/nsft.
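The mesh inextensibility constraint mentioned in the abstract is commonly realized as a penalty that keeps the edge lengths of the deformed mesh close to those of the template. The sketch below is an illustrative, generic version of such a penalty (function name and NumPy formulation are assumptions; it is not the paper's implementation):

```python
# Generic edge-length (inextensibility) penalty for a triangle mesh.
# Illustrative sketch only; not the implementation from the paper.
import numpy as np

def inextensibility_loss(verts_def, verts_tpl, edges):
    """Mean squared deviation of deformed edge lengths from template edge lengths.

    verts_def, verts_tpl: (V, 3) arrays of deformed and template vertex positions.
    edges: (E, 2) integer array of vertex index pairs forming mesh edges.
    """
    len_def = np.linalg.norm(verts_def[edges[:, 0]] - verts_def[edges[:, 1]], axis=1)
    len_tpl = np.linalg.norm(verts_tpl[edges[:, 0]] - verts_tpl[edges[:, 1]], axis=1)
    return float(np.mean((len_def - len_tpl) ** 2))
```

A rigid motion of the mesh incurs zero penalty, while stretching or compressing any edge is penalized, which is what makes such a term useful as a geometric prior for nearly inextensible surfaces like paper or cloth.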

Cite

Text

Tran et al. "Image-Guided Shape-from-Template Using Mesh Inextensibility Constraints." International Conference on Computer Vision, 2025.

Markdown

[Tran et al. "Image-Guided Shape-from-Template Using Mesh Inextensibility Constraints." International Conference on Computer Vision, 2025.](https://mlanthology.org/iccv/2025/tran2025iccv-imageguided/)

BibTeX

@inproceedings{tran2025iccv-imageguided,
  title     = {{Image-Guided Shape-from-Template Using Mesh Inextensibility Constraints}},
  author    = {Tran, Thuy and Chen, Ruochen and Parashar, Shaifali},
  booktitle = {International Conference on Computer Vision},
  year      = {2025},
  pages     = {7419-7428},
  url       = {https://mlanthology.org/iccv/2025/tran2025iccv-imageguided/}
}