Photorealistic 3D Face Modeling on a Smartphone
Abstract
In this paper, we propose an efficient method for creating a photorealistic 3D face model on a smartphone. Major features of the human face, such as the eyes, nose, lips, cheeks, chin, and profile boundary, are extracted automatically from front and profile images using an ACM (active contour model) and a deformable ICP (iterative closest point) method. A 3D face model is generated by deforming a generic model so that it corresponds correctly to the extracted facial features. A skin texture map is created from the input images and mapped onto the deformed 3D face model. All procedures are implemented and optimized to run efficiently on a smartphone with limited processing power and memory. Experimental results show that photorealistic 3D face models are created successfully for a variety of test samples, and modeling takes about 6 seconds on an off-the-shelf smartphone.
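The abstract mentions a deformable ICP step for registering profile features but gives no details. As an illustration of the rigid core of ICP only (not the authors' deformable variant, whose deformation model is not described here), a minimal 2D point-set alignment sketch might look like the following; all function names and point sets are illustrative assumptions:

```python
import math

def nearest(p, pts):
    # closest-point correspondence: nearest neighbor by squared distance
    return min(pts, key=lambda q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def best_rigid(src, dst):
    # closed-form 2D rigid transform (rotation + translation) minimizing
    # the sum of squared distances between paired points
    n = len(src)
    cxs = sum(p[0] for p in src) / n; cys = sum(p[1] for p in src) / n
    cxd = sum(p[0] for p in dst) / n; cyd = sum(p[1] for p in dst) / n
    a = b = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        sx -= cxs; sy -= cys; dx -= cxd; dy -= cyd
        a += sx * dx + sy * dy
        b += sx * dy - sy * dx
    theta = math.atan2(b, a)
    c, s = math.cos(theta), math.sin(theta)
    tx = cxd - (c * cxs - s * cys)
    ty = cyd - (s * cxs + c * cys)
    return c, s, tx, ty

def icp(src, dst, iters=20):
    # alternate nearest-neighbor matching and closed-form alignment
    pts = list(src)
    for _ in range(iters):
        paired = [nearest(p, dst) for p in pts]
        c, s, tx, ty = best_rigid(pts, paired)
        pts = [(c * x - s * y + tx, s * x + c * y + ty) for (x, y) in pts]
    return pts
```

The paper's deformable variant would additionally allow non-rigid warping of the generic model toward the extracted feature curves; this sketch shows only the rigid alternation between correspondence and alignment.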
Cite
Text
Lee et al. "Photorealistic 3D Face Modeling on a Smartphone." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2011. doi:10.1109/CVPRW.2011.5981841
Markdown
[Lee et al. "Photorealistic 3D Face Modeling on a Smartphone." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2011.](https://mlanthology.org/cvprw/2011/lee2011cvprw-photorealistic/) doi:10.1109/CVPRW.2011.5981841
BibTeX
@inproceedings{lee2011cvprw-photorealistic,
title = {{Photorealistic 3D Face Modeling on a Smartphone}},
author = {Lee, Won Beom and Lee, Man Hee and Park, In Kyu},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2011},
pages = {163--168},
doi = {10.1109/CVPRW.2011.5981841},
url = {https://mlanthology.org/cvprw/2011/lee2011cvprw-photorealistic/}
}