Dynamic Appearance Modelling from Minimal Cameras
Abstract
We present a novel method for modelling dynamic texture appearance from a minimal set of cameras. Previous methods to capture the dynamic appearance of a human from multi-view video have relied on large, expensive camera setups, and typically store texture on a frame-by-frame basis. We fit a parameterised human body model to multi-view video from minimal cameras (as few as 3), and combine the partial texture observations from multiple viewpoints and frames in a learned framework to generate full-body textures with dynamic details given an input pose. Key to our method are our multi-band loss functions, which apply separate blending functions to the high and low spatial frequencies to reduce texture artefacts. We evaluate our method on a range of multi-view datasets, and show that our model is able to accurately produce full-body dynamic textures, even with only partial camera coverage. We demonstrate that our method outperforms other texture generation methods on minimal camera setups.
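To make the multi-band idea concrete, below is a minimal sketch of a frequency-separated texture loss in PyTorch. The band split (an average-pool low-pass with bilinear upsampling), the pooling factor, the per-band weights, and the helper names `split_bands` and `multi_band_loss` are all illustrative assumptions; the paper's exact blending functions are not reproduced here.

```python
# Minimal sketch of a multi-band texture loss: separate penalties on the
# low and high spatial-frequency bands. Band split and weights are
# illustrative assumptions, not the paper's exact formulation.
import torch
import torch.nn.functional as F


def split_bands(tex: torch.Tensor, factor: int = 4):
    """Split a texture (N, C, H, W) into low- and high-frequency bands.

    Assumes H and W are divisible by `factor`.
    """
    low = F.interpolate(
        F.avg_pool2d(tex, factor),  # low-pass via downsampling
        scale_factor=factor, mode="bilinear", align_corners=False,
    )
    high = tex - low                # residual carries the high frequencies
    return low, high


def multi_band_loss(pred: torch.Tensor, target: torch.Tensor,
                    w_low: float = 1.0, w_high: float = 0.5) -> torch.Tensor:
    """Apply a separate L1 loss to each frequency band."""
    pred_low, pred_high = split_bands(pred)
    tgt_low, tgt_high = split_bands(target)
    return (w_low * F.l1_loss(pred_low, tgt_low)
            + w_high * F.l1_loss(pred_high, tgt_high))


# Usage: compare a generated full-body texture against an observed one.
pred = torch.rand(1, 3, 256, 256)
target = torch.rand(1, 3, 256, 256)
loss = multi_band_loss(pred, target)
```

Weighting the bands separately lets the high-frequency term target fine texture detail while the low-frequency term governs overall colour and shading, which is one plausible way such a split can reduce blending artefacts.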
Cite
Text
Bridgeman et al. "Dynamic Appearance Modelling from Minimal Cameras." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021. doi:10.1109/CVPRW53098.2021.00195
Markdown
[Bridgeman et al. "Dynamic Appearance Modelling from Minimal Cameras." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021.](https://mlanthology.org/cvprw/2021/bridgeman2021cvprw-dynamic/) doi:10.1109/CVPRW53098.2021.00195
BibTeX
@inproceedings{bridgeman2021cvprw-dynamic,
title = {{Dynamic Appearance Modelling from Minimal Cameras}},
author = {Bridgeman, Lewis and Guillemaut, Jean-Yves and Hilton, Adrian},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2021},
pages = {1760--1769},
doi = {10.1109/CVPRW53098.2021.00195},
url = {https://mlanthology.org/cvprw/2021/bridgeman2021cvprw-dynamic/}
}