Gaze Estimation from Multimodal Kinect Data
Abstract
This paper addresses the problem of free gaze estimation under unrestricted head motion. More precisely, unlike previous approaches that mainly focus on estimating gaze towards a small planar screen, we propose a method to estimate the gaze direction in 3D space. In this context the paper makes the following contributions: (i) leveraging the Kinect device, we propose a multimodal method that relies on depth sensing to obtain robust and accurate head pose tracking even under large head poses, and on visual data to obtain the remaining eye-in-head gaze directional information from the eye image; (ii) a rectification scheme for the eye image that exploits the 3D mesh tracking, allowing head-pose-free estimation of the eye-in-head gaze direction; (iii) a simple way of collecting ground-truth data thanks to the Kinect device. Results on three users demonstrate the great potential of our approach.
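The abstract's decomposition — head pose from depth sensing, eye-in-head direction from the rectified eye image — implies that the final 3D gaze is the head rotation applied to the eye-in-head vector. A minimal sketch of that composition (the function name and the head-to-world matrix convention are our assumptions, not from the paper):

```python
import math

def gaze_world(head_R, eye_in_head):
    """Rotate the eye-in-head gaze vector into world coordinates.

    head_R: 3x3 head rotation matrix (head -> world), e.g. obtained
            from a depth-based 3D mesh head tracker (assumed input).
    eye_in_head: gaze direction in the head coordinate frame,
                 estimated from the rectified eye image (assumed input).
    Returns a unit-length gaze vector in world coordinates.
    """
    # Matrix-vector product: v = head_R @ eye_in_head
    v = [sum(head_R[i][j] * eye_in_head[j] for j in range(3))
         for i in range(3)]
    # Normalize so the result is a pure direction
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]
```

With an identity head pose the direction is unchanged; a 90° yaw simply re-expresses the same eye-in-head vector in world coordinates.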
Cite
Text
Mora and Odobez. "Gaze Estimation from Multimodal Kinect Data." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2012. doi:10.1109/CVPRW.2012.6239182
Markdown
[Mora and Odobez. "Gaze Estimation from Multimodal Kinect Data." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2012.](https://mlanthology.org/cvprw/2012/mora2012cvprw-gaze/) doi:10.1109/CVPRW.2012.6239182
BibTeX
@inproceedings{mora2012cvprw-gaze,
title = {{Gaze Estimation from Multimodal Kinect Data}},
author = {Mora, Kenneth Alberto Funes and Odobez, Jean-Marc},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year = {2012},
pages = {25-30},
doi = {10.1109/CVPRW.2012.6239182},
url = {https://mlanthology.org/cvprw/2012/mora2012cvprw-gaze/}
}