A Dataset for Movie Description
Abstract
Audio Description (AD) provides linguistic descriptions of movies and allows visually impaired people to follow a movie along with their peers. Such descriptions are by design mainly visual and thus naturally form an interesting data source for computer vision and computational linguistics. In this work we propose a novel dataset that contains transcribed ADs temporally aligned to full-length HD movies. In addition, we also collected the aligned movie scripts used in prior work and compare the two sources of descriptions. In total, the MPII Movie Description dataset (MPII-MD) contains a parallel corpus of over 68K sentences and video snippets from 94 HD movies. We characterize the dataset by benchmarking different approaches for generating video descriptions. Comparing ADs to scripts, we find that ADs are far more visual and describe precisely what is shown rather than what should happen according to the scripts, which are created prior to movie production.
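To make the structure of such a parallel corpus concrete, the sketch below shows one hypothetical way to represent a single aligned (sentence, video snippet) entry in Python. The field names (movie, start, end, sentence, source) are illustrative assumptions for exposition, not the dataset's actual distribution format.

from dataclasses import dataclass

@dataclass
class AlignedClip:
    """Hypothetical entry of the parallel corpus: one description
    sentence temporally aligned to a snippet of a full-length movie."""
    movie: str      # movie identifier (assumed field)
    start: float    # snippet start time in seconds (assumed field)
    end: float      # snippet end time in seconds (assumed field)
    sentence: str   # transcribed AD or script sentence
    source: str     # "AD" or "script" (assumed labels)

# Illustrative entry with made-up values.
example = AlignedClip(
    movie="example_movie",
    start=123.4,
    end=127.9,
    sentence="She walks into the dimly lit room.",
    source="AD",
)
print(example.sentence)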
Cite
Text
Rohrbach et al. "A Dataset for Movie Description." Conference on Computer Vision and Pattern Recognition, 2015. doi:10.1109/CVPR.2015.7298940
Markdown
[Rohrbach et al. "A Dataset for Movie Description." Conference on Computer Vision and Pattern Recognition, 2015.](https://mlanthology.org/cvpr/2015/rohrbach2015cvpr-dataset/) doi:10.1109/CVPR.2015.7298940
BibTeX
@inproceedings{rohrbach2015cvpr-dataset,
title = {{A Dataset for Movie Description}},
author = {Rohrbach, Anna and Rohrbach, Marcus and Tandon, Niket and Schiele, Bernt},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2015},
doi = {10.1109/CVPR.2015.7298940},
url = {https://mlanthology.org/cvpr/2015/rohrbach2015cvpr-dataset/}
}