DeepMultiCap: Performance Capture of Multiple Characters Using Sparse Multiview Cameras
Abstract
We propose DeepMultiCap, a novel method for multi-person performance capture using sparse multi-view cameras. Our method captures time-varying surface details without requiring pre-scanned template models. To tackle the severe occlusion challenge in closely interacting scenes, we combine a recently proposed pixel-aligned implicit function with a parametric body model for robust reconstruction of invisible surface areas. An attention-aware module is designed to aggregate fine-grained geometry details from multi-view images, yielding high-fidelity results. Beyond this spatial attention mechanism, for video inputs we further propose a novel temporal fusion method that alleviates noise and temporal inconsistencies when reconstructing moving characters. For quantitative evaluation, we contribute a high-quality multi-person dataset, MultiHuman, which consists of 150 static scenes with different levels of occlusion and ground-truth 3D human models. Experimental results demonstrate the state-of-the-art performance of our method and its strong generalization to real multi-view video data, outperforming prior works by a large margin.
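To make the architecture described above concrete, the following is a minimal sketch (not the authors' released code) of a pixel-aligned implicit function with attention-based multi-view feature fusion and a parametric-model feature, as outlined in the abstract. The encoder, layer sizes, the per-point SMPL feature, and the projection convention are all assumptions made for illustration.

```python
# Hedged sketch of a pixel-aligned implicit function with attention over views.
# All network sizes and the smpl_feat input are illustrative assumptions.
import torch
import torch.nn as nn


class AttentionMultiViewPIFu(nn.Module):
    def __init__(self, feat_dim=256, smpl_feat_dim=64, num_heads=4):
        super().__init__()
        # Per-view image encoder producing a pixel-aligned feature map.
        self.encoder = nn.Conv2d(3, feat_dim, kernel_size=7, stride=4, padding=3)
        # Self-attention across views, letting visible views dominate occluded ones.
        self.view_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        # Occupancy MLP conditioned on fused image features + parametric-model feature.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + smpl_feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, images, points_2d, smpl_feat):
        # images:    (V, 3, H, W)        sparse multi-view inputs
        # points_2d: (V, N, 2)           query points projected into each view, in [-1, 1]
        # smpl_feat: (N, smpl_feat_dim)  per-point feature from a fitted parametric model
        feats = self.encoder(images)                               # (V, C, h, w)
        grid = points_2d.unsqueeze(2)                              # (V, N, 1, 2)
        sampled = nn.functional.grid_sample(
            feats, grid, align_corners=True).squeeze(-1)           # (V, C, N)
        tokens = sampled.permute(2, 0, 1)                          # (N, V, C): one token per view
        fused, _ = self.view_attn(tokens, tokens, tokens)          # attention over views
        fused = fused.mean(dim=1)                                  # (N, C)
        occ = self.mlp(torch.cat([fused, smpl_feat], dim=-1))      # (N, 1) occupancy
        return occ
```

In this sketch, occupancy is queried at arbitrary 3D points (after projecting them into each camera), so a mesh can be extracted afterwards with Marching Cubes; the paper's temporal fusion for video inputs is not shown here.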
Cite
Text
Zheng et al. "DeepMultiCap: Performance Capture of Multiple Characters Using Sparse Multiview Cameras." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00618
Markdown
[Zheng et al. "DeepMultiCap: Performance Capture of Multiple Characters Using Sparse Multiview Cameras." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/zheng2021iccv-deepmulticap/) doi:10.1109/ICCV48922.2021.00618
BibTeX
@inproceedings{zheng2021iccv-deepmulticap,
title = {{DeepMultiCap: Performance Capture of Multiple Characters Using Sparse Multiview Cameras}},
author = {Zheng, Yang and Shao, Ruizhi and Zhang, Yuxiang and Yu, Tao and Zheng, Zerong and Dai, Qionghai and Liu, Yebin},
booktitle = {International Conference on Computer Vision},
year = {2021},
pages = {6239-6249},
doi = {10.1109/ICCV48922.2021.00618},
url = {https://mlanthology.org/iccv/2021/zheng2021iccv-deepmulticap/}
}