Head Nod Detection from a Full 3D Model

Abstract

As a non-verbal communication means, head gestures play an important role in face-to-face conversation, and recognizing them is therefore of high value for social behavior analysis and Human-Robot Interaction (HRI) modelling. Among the various gestures, the head nod is the most common and can convey agreement or emphasis. In this paper, we propose a novel nod detection approach based on a full 3D face-centered rotation model. Compared to previous approaches, we make two contributions. Firstly, the head rotation dynamics are computed within the head coordinate system instead of the camera coordinate system, leading to pose-invariant gesture dynamics. Secondly, besides the rotation parameters, a feature related to the head rotation axis is proposed so that nod-like false positives due to body movements can be eliminated. Experiments on two-party and four-party conversations demonstrate the validity of the approach.
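The paper's pose-invariant idea can be sketched in a few lines: expressing the frame-to-frame head rotation in the head's own coordinate frame rather than the camera's, and extracting the rotation angle and axis (a near-horizontal axis being characteristic of a nod). This is a minimal illustration, not the authors' implementation; the function and variable names are hypothetical.

```python
import numpy as np

def rot_x(a):
    """Rotation matrix about the x-axis (pitch), angle in radians."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def head_frame_rotation(R_prev, R_curr):
    """Express the frame-to-frame head rotation in the head's own
    coordinate frame (pose-invariant) and return its angle and axis.

    R_prev, R_curr: 3x3 head-pose rotation matrices in camera coordinates.
    """
    # Relative rotation seen from the head frame: the camera-frame pose
    # cancels out, so the result does not depend on where the head points.
    dR = R_prev.T @ R_curr
    # Rotation angle from the trace (clipped for numerical safety).
    angle = np.arccos(np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0))
    # Rotation axis from the skew-symmetric part of dR.
    axis = np.array([dR[2, 1] - dR[1, 2],
                     dR[0, 2] - dR[2, 0],
                     dR[1, 0] - dR[0, 1]])
    n = np.linalg.norm(axis)
    axis = axis / n if n > 1e-9 else np.zeros(3)
    return angle, axis
```

A small pitch step applied in the head frame yields the same angle and axis whatever the camera-frame pose, which is the pose invariance the abstract refers to; a nod detector could then threshold the angle sequence while checking that the axis stays close to the head's lateral (x) axis.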

Cite

Text

Chen et al. "Head Nod Detection from a Full 3D Model." IEEE/CVF International Conference on Computer Vision Workshops, 2015. doi:10.1109/ICCVW.2015.75

Markdown

[Chen et al. "Head Nod Detection from a Full 3D Model." IEEE/CVF International Conference on Computer Vision Workshops, 2015.](https://mlanthology.org/iccvw/2015/chen2015iccvw-head/) doi:10.1109/ICCVW.2015.75

BibTeX

@inproceedings{chen2015iccvw-head,
  title     = {{Head Nod Detection from a Full 3D Model}},
  author    = {Chen, Yiqiang and Yu, Yu and Odobez, Jean-Marc},
  booktitle = {IEEE/CVF International Conference on Computer Vision Workshops},
  year      = {2015},
  pages     = {528--536},
  doi       = {10.1109/ICCVW.2015.75},
  url       = {https://mlanthology.org/iccvw/2015/chen2015iccvw-head/}
}