TVR: A Large-Scale Dataset for Video-Subtitle Moment Retrieval
Abstract
We introduce TV show Retrieval (TVR), a new multimodal retrieval dataset. TVR requires systems to understand both videos and their associated subtitle (dialogue) texts, making the task more realistic. The dataset contains 109K queries collected on 21.8K videos from 6 TV shows of diverse genres, where each query is associated with a tight temporal window. The queries are also labeled with query types indicating whether each query is more related to the video, the subtitle, or both, allowing in-depth analysis of the dataset and of methods built on top of it. Strict qualification and post-annotation verification tests are applied to ensure the quality of the collected data. Additionally, we present several baselines and a novel Cross-modal Moment Localization (XML) network for the multimodal moment retrieval task. The proposed XML model uses a late fusion design with a novel Convolutional Start-End detector (ConvSE), surpassing baselines by a large margin with better efficiency, and providing a strong starting point for future work.
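The abstract only names the ConvSE detector, so below is a minimal PyTorch sketch of the idea as described: two learned 1D convolutions slide over a query-clip similarity sequence and act as edge detectors that score each position as a moment start or end. The class name, kernel size, single-filter design, and span-scoring step are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvStartEndDetector(nn.Module):
    """Hypothetical sketch of a ConvSE-style start/end detector.

    Two learned 1D convolutions scan a query-clip similarity sequence,
    acting like edge detectors: a rising edge suggests a moment start,
    a falling edge a moment end. Kernel size and the single-channel
    design are assumptions for illustration.
    """

    def __init__(self, kernel_size: int = 5):
        super().__init__()
        pad = kernel_size // 2
        # One filter per boundary type; padding preserves sequence length.
        self.start_conv = nn.Conv1d(1, 1, kernel_size, padding=pad)
        self.end_conv = nn.Conv1d(1, 1, kernel_size, padding=pad)

    def forward(self, scores: torch.Tensor):
        # scores: (batch, num_clips) query-clip similarity sequence.
        x = scores.unsqueeze(1)                       # (batch, 1, num_clips)
        start_logits = self.start_conv(x).squeeze(1)  # (batch, num_clips)
        end_logits = self.end_conv(x).squeeze(1)      # (batch, num_clips)
        return start_logits, end_logits


# Toy usage: pick the highest-scoring (start, end) span with start <= end.
detector = ConvStartEndDetector()
sim = torch.randn(2, 12)  # similarity scores for 12 clips in 2 videos
start_logits, end_logits = detector(sim)
start_p = F.softmax(start_logits, dim=-1)
end_p = F.softmax(end_logits, dim=-1)
span_scores = start_p.unsqueeze(2) * end_p.unsqueeze(1)  # (batch, 12, 12)
span_scores = torch.triu(span_scores)                    # keep start <= end
best = span_scores.flatten(1).argmax(dim=1)
best_start, best_end = best // 12, best % 12
print(best_start, best_end)
```

One appeal of this design, under the reading above, is that boundary detection reduces to 1D edge detection over similarity scores, so the same small filters apply to videos of any length without per-position parameters.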
Cite
Text
Lei et al. "TVR: A Large-Scale Dataset for Video-Subtitle Moment Retrieval." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58589-1_27
Markdown
[Lei et al. "TVR: A Large-Scale Dataset for Video-Subtitle Moment Retrieval." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/lei2020eccv-tvr/) doi:10.1007/978-3-030-58589-1_27
BibTeX
@inproceedings{lei2020eccv-tvr,
  title = {{TVR: A Large-Scale Dataset for Video-Subtitle Moment Retrieval}},
  author = {Lei, Jie and Yu, Licheng and Berg, Tamara L. and Bansal, Mohit},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year = {2020},
  doi = {10.1007/978-3-030-58589-1_27},
  url = {https://mlanthology.org/eccv/2020/lei2020eccv-tvr/}
}