Face-Off: A Face Reconstruction Technique for Virtual Reality (VR) Scenarios
Abstract
Virtual Reality (VR) headsets occlude a significant portion of the human face, yet many VR applications, such as video teleconferencing, require the user's real face. This paper proposes a wearable-camera-based solution to reconstruct the real face of a person wearing a VR headset. Our solution builds on asymmetrical principal component analysis (aPCA). A user-specific model is trained using aPCA with full-face, lip, and eye-region information. In the testing phase, the lower face region and partial eye information are used to reconstruct the wearer's face. The online testing session consists of two phases: (i) a calibration phase and (ii) a reconstruction phase. In the former, a small calibration step aligns the test information with the training data; the latter uses half-face information to reconstruct the full face from the aPCA-trained model. The proposed approach is validated with qualitative and quantitative analysis.
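The reconstruct-from-partial-observation idea the abstract describes can be illustrated with ordinary PCA (not the paper's aPCA, and with synthetic data standing in for face images): fit a basis on full training vectors, then estimate a new sample's coefficients from only its visible half and project back to the full dimension. All names and dimensions below are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 "full face" training vectors of dimension 100.
# In the paper these would be user-specific full-face captures from training.
n_train, dim = 200, 100
faces = rng.normal(size=(n_train, dim))

# --- Training: build a PCA basis from the full-face data ---
mean = faces.mean(axis=0)
centered = faces - mean
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
k = 20                        # number of principal components kept
basis = Vt[:k]                # shape (k, dim)

# --- Testing: only the "lower face" half of a new sample is observed ---
test_face = rng.normal(size=dim)
visible = np.arange(dim // 2)             # indices of observed entries

# Solve for PCA coefficients using only the visible rows of the basis,
# then reconstruct the full-dimensional face from those coefficients.
A = basis[:, visible].T                   # (|visible|, k)
b = test_face[visible] - mean[visible]
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
reconstruction = mean + coeffs @ basis    # full-dimensional estimate

print(reconstruction.shape)               # (100,)
```

The least-squares step is the key move: the occluded entries never enter the fit, but because the basis was learned on full faces, the recovered coefficients still determine an estimate for the hidden region.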
Cite
Text
Khan et al. "Face-Off: A Face Reconstruction Technique for Virtual Reality (VR) Scenarios." European Conference on Computer Vision, 2016. doi:10.1007/978-3-319-46604-0_35
Markdown
[Khan et al. "Face-Off: A Face Reconstruction Technique for Virtual Reality (VR) Scenarios." European Conference on Computer Vision, 2016.](https://mlanthology.org/eccv/2016/khan2016eccv-face/) doi:10.1007/978-3-319-46604-0_35
BibTeX
@inproceedings{khan2016eccv-face,
title = {{Face-Off: A Face Reconstruction Technique for Virtual Reality (VR) Scenarios}},
author = {Khan, Muhammad Sikandar Lal and Réhman, Shafiq ur and Söderström, Ulrik and Halawani, Alaa and Li, Haibo},
booktitle = {European Conference on Computer Vision},
year = {2016},
pages = {490--503},
doi = {10.1007/978-3-319-46604-0_35},
url = {https://mlanthology.org/eccv/2016/khan2016eccv-face/}
}