Vid2Int: Detecting Implicit Intention from Long Dialog Videos

Abstract

Detecting subtle intention, such as deception or subtext, of a person in a long dialog video, or implicit intention detection (IID), is a challenging problem. The transcript (textual cues) often reveals little, so audio-visual cues, including voice tone as well as facial and body behaviour, are the main focus of automated IID. Contextual cues are also crucial: a person's implicit intentions are often correlated and context-dependent as the person moves from one question-answer pair to the next. However, no existing dataset contains fine-grained annotation at the question-answer pair (video segment) level. The first contribution of this work is thus a new benchmark dataset, called Vid2Int-Deception, to fill this gap. A novel multi-grain representation model is also proposed to capture the subtle movement changes of eyes, face, and body (relevant for inferring intention) from a long dialog video. Moreover, to model the temporal correlation between the implicit intentions across video segments, we propose a Video-to-Intention network (Vid2Int) based on an attentive recurrent neural network (RNN). Extensive experiments show that our model achieves state-of-the-art results.
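To make the high-level design concrete, below is a minimal PyTorch sketch (not the authors' released code) of an attentive RNN that maps one fused multi-grain feature vector per question-answer segment to a per-segment intention prediction. All names and dimensions (Vid2IntSketch, feat_dim, hidden_dim) are illustrative assumptions; the paper's exact feature extraction and attention formulation may differ.

import torch
import torch.nn as nn

class Vid2IntSketch(nn.Module):
    """Sketch: attentive RNN over per-segment multi-grain features."""

    def __init__(self, feat_dim=512, hidden_dim=256, num_classes=2):
        super().__init__()
        # The GRU models temporal correlation between consecutive
        # question-answer segments of the same long dialog video.
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Additive attention scores each hidden state, producing a
        # video-level context vector that conditions every segment.
        self.attn = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(hidden_dim * 2, num_classes)

    def forward(self, seg_feats):
        # seg_feats: (batch, num_segments, feat_dim), one fused
        # eye/face/body feature vector per question-answer segment.
        h, _ = self.rnn(seg_feats)                        # (B, T, H)
        weights = torch.softmax(self.attn(h), dim=1)      # (B, T, 1)
        context = (weights * h).sum(dim=1, keepdim=True)  # (B, 1, H)
        context = context.expand(-1, h.size(1), -1)       # (B, T, H)
        # Classify each segment from its own state plus global context.
        return self.classifier(torch.cat([h, context], dim=-1))

if __name__ == "__main__":
    model = Vid2IntSketch()
    feats = torch.randn(2, 10, 512)  # 2 videos, 10 QA segments each
    print(model(feats).shape)        # torch.Size([2, 10, 2])

The key design point the sketch illustrates is that per-segment predictions are not made in isolation: each segment's hidden state is combined with an attention-weighted summary of the whole dialog, reflecting the paper's claim that implicit intentions are correlated across question-answer pairs.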

Cite

Text

Xu et al. "Vid2Int: Detecting Implicit Intention from Long Dialog Videos." Winter Conference on Applications of Computer Vision, 2021.

Markdown

[Xu et al. "Vid2Int: Detecting Implicit Intention from Long Dialog Videos." Winter Conference on Applications of Computer Vision, 2021.](https://mlanthology.org/wacv/2021/xu2021wacv-vid2int/)

BibTeX

@inproceedings{xu2021wacv-vid2int,
  title     = {{Vid2Int: Detecting Implicit Intention from Long Dialog Videos}},
  author    = {Xu, Xiaoli and Lu, Yao and Lu, Zhiwu and Xiang, Tao},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2021},
  pages     = {3299--3308},
  url       = {https://mlanthology.org/wacv/2021/xu2021wacv-vid2int/}
}