Deep Learning Whole Body Point Cloud Scans from a Single Depth Map

Abstract

Personalized knowledge of body shape has numerous applications in fashion and clothing, as well as in health monitoring. Whole body 3D scanning offers a relatively simple way for individuals to obtain this information about themselves without requiring much knowledge of anthropometry. With current implementations, however, scanning devices are large, complex, and expensive. In order to make such systems as accessible and widespread as possible, it is necessary to simplify the process and reduce their hardware requirements. Deep learning models have emerged as the leading method for tackling visual tasks, including various aspects of 3D reconstruction. In this paper, we demonstrate that by leveraging deep learning it is possible to create very simple whole body scanners that require only a single input depth map to operate. We show that the presented model produces whole body point clouds with an accuracy of 5.19 mm.
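
To illustrate the input-to-output relationship described in the abstract (a single depth map in, a whole body point cloud out), the sketch below shows a generic encoder-decoder network that regresses a fixed-size point set from one depth image. It is purely illustrative: the layer sizes, point count, and output parameterization are assumptions for this sketch and do not reflect the architecture used in the paper.

  # Illustrative sketch only: a generic encoder-decoder mapping one depth map
  # (1 x 128 x 128) to a fixed-size point set (num_points x 3). This is NOT the
  # authors' model; all shapes and layer choices are assumptions.
  import torch
  import torch.nn as nn

  class DepthToPointCloud(nn.Module):
      def __init__(self, num_points=2048):
          super().__init__()
          self.num_points = num_points
          # Convolutional encoder: compress the 128x128 depth map to a feature map.
          self.encoder = nn.Sequential(
              nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # -> 64x64
              nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x32
              nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # -> 16x16
              nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # -> 8x8
          )
          # Fully connected decoder: regress a 3D coordinate for each output point.
          self.decoder = nn.Sequential(
              nn.Flatten(),
              nn.Linear(256 * 8 * 8, 1024), nn.ReLU(),
              nn.Linear(1024, num_points * 3),
          )

      def forward(self, depth):
          # depth: (B, 1, 128, 128) -> points: (B, num_points, 3)
          features = self.encoder(depth)
          points = self.decoder(features)
          return points.view(-1, self.num_points, 3)

  # Usage: a random 128x128 depth map produces a 2048-point cloud.
  model = DepthToPointCloud()
  cloud = model(torch.rand(1, 1, 128, 128))
  print(cloud.shape)  # torch.Size([1, 2048, 3])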

Cite

Text

Lunscher and Zelek. "Deep Learning Whole Body Point Cloud Scans from a Single Depth Map." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2018. doi:10.1109/CVPRW.2018.00157

Markdown

[Lunscher and Zelek. "Deep Learning Whole Body Point Cloud Scans from a Single Depth Map." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2018.](https://mlanthology.org/cvprw/2018/lunscher2018cvprw-deep/) doi:10.1109/CVPRW.2018.00157

BibTeX

@inproceedings{lunscher2018cvprw-deep,
  title     = {{Deep Learning Whole Body Point Cloud Scans from a Single Depth Map}},
  author    = {Lunscher, Nolan and Zelek, John S.},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2018},
  pages     = {1095--1102},
  doi       = {10.1109/CVPRW.2018.00157},
  url       = {https://mlanthology.org/cvprw/2018/lunscher2018cvprw-deep/}
}