Multi-Query Video Retrieval

Abstract

Retrieving target videos based on text descriptions is a task of great practical value and has received increasing attention over the past few years. Despite recent progress, imperfect annotations in existing video retrieval datasets have posed significant challenges to model evaluation and development. In this paper, we tackle this issue by focusing on the less-studied setting of multi-query video retrieval, where multiple descriptions are provided to the model for searching over the video archive. We first show that the multi-query retrieval task effectively mitigates the dataset noise introduced by imperfect annotations and correlates better with human judgment when evaluating the retrieval abilities of current models. We then investigate several methods that leverage multiple queries at training time, and demonstrate that multi-query inspired training can lead to superior performance and better generalization. We hope further investigation in this direction can bring new insights into building systems that perform better in real-world video retrieval applications.
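To make the multi-query setting concrete, here is a minimal sketch of retrieval with score fusion: each of several text queries is compared against every video, and the per-query similarity scores are averaged before ranking. Averaging cosine similarities is just one simple fusion strategy; the function names and the random embedding setup below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def multi_query_rank(query_embs: np.ndarray, video_embs: np.ndarray) -> np.ndarray:
    """Rank videos by averaging per-query cosine similarities.

    query_embs: (Q, D) embeddings of the Q text queries describing one target.
    video_embs: (V, D) embeddings of the V videos in the archive.
    Returns video indices sorted from best to worst match.
    """
    # L2-normalize so dot products become cosine similarities.
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    v = video_embs / np.linalg.norm(video_embs, axis=1, keepdims=True)
    sims = q @ v.T                 # (Q, V) similarity matrix
    fused = sims.mean(axis=0)      # average the scores across queries
    return np.argsort(-fused)      # highest fused score first

# Toy example: 2 queries, 3 candidate videos in a 4-d embedding space.
rng = np.random.default_rng(0)
queries = rng.normal(size=(2, 4))
videos = rng.normal(size=(3, 4))
ranking = multi_query_rank(queries, videos)
print(ranking.tolist())
```

With a single query this reduces to standard text-to-video retrieval; with multiple queries the averaging step dampens the effect of any one noisy or ambiguous description.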

Cite

Text

Wang et al. "Multi-Query Video Retrieval." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-19781-9_14

Markdown

[Wang et al. "Multi-Query Video Retrieval." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/wang2022eccv-multiquery/) doi:10.1007/978-3-031-19781-9_14

BibTeX

@inproceedings{wang2022eccv-multiquery,
  title     = {{Multi-Query Video Retrieval}},
  author    = {Wang, Zeyu and Wu, Yu and Narasimhan, Karthik and Russakovsky, Olga},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2022},
  doi       = {10.1007/978-3-031-19781-9_14},
  url       = {https://mlanthology.org/eccv/2022/wang2022eccv-multiquery/}
}