Video Question Answering on Screencast Tutorials

Abstract

This paper presents a new video question answering task on screencast tutorials. We introduce a dataset of question, answer, and context triples drawn from the tutorial videos for a software product. Unlike other video question answering work, all the answers in our dataset are grounded in the domain knowledge base. A one-shot recognition algorithm is designed to extract the visual cues, which helps enhance the performance of video question answering. We also propose several baseline neural network architectures based on various aspects of the video contexts in the dataset. The experimental results demonstrate that our proposed models significantly improve question answering performance by incorporating multi-modal contexts and domain knowledge.
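As a rough illustration of the kind of baseline the abstract describes, the sketch below fuses a question encoding with visual-cue features and scores candidate answers drawn from a fixed domain knowledge base. This is not the authors' architecture: the module names, dimensions, and simple concatenation fusion are all assumptions made for illustration.

```python
# Minimal sketch (not the paper's model): fuse a text question with
# visual-cue features and classify over knowledge-base answer entries.
import torch
import torch.nn as nn

class MultiModalQABaseline(nn.Module):
    def __init__(self, vocab_size, num_kb_answers, embed_dim=128,
                 hidden_dim=256, visual_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.question_rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        # Because every answer is grounded in the knowledge base, QA
        # reduces to classification over the fixed set of KB entries.
        self.classifier = nn.Linear(2 * hidden_dim, num_kb_answers)

    def forward(self, question_tokens, visual_cues):
        # question_tokens: (batch, seq_len) token ids
        # visual_cues: (batch, visual_dim) features, e.g. detections of
        # on-screen tools/panels from a one-shot recognizer (assumed)
        _, (h, _) = self.question_rnn(self.embed(question_tokens))
        q = h[-1]                              # (batch, hidden_dim)
        v = torch.relu(self.visual_proj(visual_cues))
        fused = torch.cat([q, v], dim=-1)      # simple concat fusion
        return self.classifier(fused)          # logits over KB answers

model = MultiModalQABaseline(vocab_size=5000, num_kb_answers=300)
logits = model(torch.randint(0, 5000, (2, 12)), torch.randn(2, 512))
print(logits.shape)  # torch.Size([2, 300])
```

Concatenation is just the simplest fusion choice; the paper's models combine several kinds of video context, and richer fusion (e.g. attention over frames or transcript segments) would follow the same overall pattern.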

Cite

Text

Zhao et al. "Video Question Answering on Screencast Tutorials." International Joint Conference on Artificial Intelligence, 2020. doi:10.24963/IJCAI.2020/148

Markdown

[Zhao et al. "Video Question Answering on Screencast Tutorials." International Joint Conference on Artificial Intelligence, 2020.](https://mlanthology.org/ijcai/2020/zhao2020ijcai-video/) doi:10.24963/IJCAI.2020/148

BibTeX

@inproceedings{zhao2020ijcai-video,
  title     = {{Video Question Answering on Screencast Tutorials}},
  author    = {Zhao, Wentian and Kim, Seokhwan and Xu, Ning and Jin, Hailin},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {1061--1068},
  doi       = {10.24963/IJCAI.2020/148},
  url       = {https://mlanthology.org/ijcai/2020/zhao2020ijcai-video/}
}