Capturing Dynamic Textured Surfaces of Moving Targets

Abstract

We present an end-to-end system for reconstructing complete, watertight, and textured models of moving subjects such as clothed humans and animals, using only three or four handheld sensors. The heart of our framework is a new pairwise registration algorithm that minimizes, using a particle swarm strategy, an alignment error metric based on mutual visibility and occlusion. We show that this algorithm reliably registers partial scans with as little as 15% overlap without requiring any initial correspondences, and outperforms alternative global registration algorithms. This registration algorithm allows us to reconstruct moving subjects from free-viewpoint video produced by consumer-grade sensors, without extensive sensor calibration, constrained capture volume, expensive arrays of cameras, or templates of the subject geometry.
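To give a flavor of the particle-swarm registration idea described above, here is a minimal, hypothetical sketch: it searches over a 2D rigid transform (rotation angle plus translation) with a basic PSO loop, using a mean nearest-neighbor squared distance as a stand-in cost, since the paper's actual visibility- and occlusion-based error metric requires full depth scans. All function names, parameters, and constants below are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def transform(points, theta, tx, ty):
    """Apply a 2D rigid transform (rotation theta, translation tx, ty)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def alignment_cost(src, dst, theta, tx, ty):
    """Mean squared nearest-neighbor distance from transformed src to dst.
    (Stand-in for the paper's visibility/occlusion-based error metric.)"""
    moved = transform(src, theta, tx, ty)
    total = 0.0
    for px, py in moved:
        total += min((px - qx) ** 2 + (py - qy) ** 2 for qx, qy in dst)
    return total / len(moved)

def pso_register(src, dst, n_particles=40, iters=150, seed=0):
    """Basic particle swarm search over (theta, tx, ty); returns best params and cost."""
    rng = random.Random(seed)
    lo = (-math.pi, -2.0, -2.0)          # assumed search bounds
    hi = ( math.pi,  2.0,  2.0)
    pos = [[rng.uniform(lo[d], hi[d]) for d in range(3)]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # per-particle best positions
    pcost = [alignment_cost(src, dst, *p) for p in pos]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g] # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia and attraction weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(3):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo[d]), hi[d])
            c = alignment_cost(src, dst, *pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost

# Toy usage: recover a known rigid transform between two point sets.
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0),
       (0.5, 0.3), (0.2, 0.8), (0.9, 0.4), (0.3, 0.1)]
dst = transform(src, 0.5, 0.3, -0.2)
(theta, tx, ty), cost = pso_register(src, dst)
```

Note that, unlike gradient-based refinement such as ICP, this global search needs no initial correspondences, which is the property the abstract highlights for low-overlap scans.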

Cite

Text

Wang et al. "Capturing Dynamic Textured Surfaces of Moving Targets." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46478-7_17

Markdown

[Wang et al. "Capturing Dynamic Textured Surfaces of Moving Targets." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/wang2016eccv-capturing/) doi:10.1007/978-3-319-46478-7_17

BibTeX

@inproceedings{wang2016eccv-capturing,
  title     = {{Capturing Dynamic Textured Surfaces of Moving Targets}},
  author    = {Wang, Ruizhe and Wei, Lingyu and Vouga, Etienne and Huang, Qixing and Ceylan, Duygu and Medioni, Gérard G. and Li, Hao},
  booktitle = {European Conference on Computer Vision},
  year      = {2016},
  pages     = {271--288},
  doi       = {10.1007/978-3-319-46478-7_17},
  url       = {https://mlanthology.org/eccv/2016/wang2016eccv-capturing/}
}