EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval
Abstract
In Composed Video Retrieval, a video and a textual description which modifies the video content are provided as inputs to the model. The aim is to retrieve the relevant video with the modified content from a database of videos. For this challenging task, the first step is to acquire large-scale training datasets and to collect high-quality benchmarks for evaluation. In this work, we introduce EgoCVR, a new evaluation benchmark for fine-grained Composed Video Retrieval using large-scale egocentric video datasets. EgoCVR consists of 2,295 queries that specifically focus on high-quality temporal video understanding. We find that existing Composed Video Retrieval frameworks do not achieve the necessary high-quality temporal video understanding for this task. To address this shortcoming, we adapt a simple training-free method, propose a generic re-ranking framework for Composed Video Retrieval, and demonstrate that this achieves strong results on EgoCVR. Our code and benchmark are freely available at https://github.com/ExplainableML/EgoCVR.
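The two-stage idea the abstract mentions (retrieve candidates by visual similarity, then re-rank them against the modification text) can be sketched in a few lines. This is a minimal, illustrative sketch assuming precomputed embeddings; the function names, the `gallery` layout, and the cosine-similarity scoring are placeholder assumptions, not the paper's actual framework or API.

```python
def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0


def retrieve_then_rerank(query_visual, modification_text_emb, gallery, k=3):
    """Generic retrieve-then-rerank pipeline (illustrative only).

    Stage 1: rank all gallery videos by visual similarity to the query video.
    Stage 2: re-rank the top-k shortlist by similarity between each
             candidate's text embedding and the modification text.
    `gallery` maps video id -> {"visual": [...], "caption": [...]}.
    """
    ranked = sorted(
        gallery.items(),
        key=lambda kv: cosine(query_visual, kv[1]["visual"]),
        reverse=True,
    )
    shortlist = ranked[:k]
    reranked = sorted(
        shortlist,
        key=lambda kv: cosine(modification_text_emb, kv[1]["caption"]),
        reverse=True,
    )
    return [vid for vid, _ in reranked]
```

In practice, the embeddings would come from pretrained video and text encoders; the point of the sketch is only the training-free two-stage structure, where the modification text decides the final ordering among visually similar candidates.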
Cite
Hummel et al. "EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72913-3_1
BibTeX
@inproceedings{hummel2024eccv-egocvr,
title = {{EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval}},
author = {Hummel, Thomas and Karthik, Shyamgopal and Georgescu, Mariana-Iuliana and Akata, Zeynep},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-72913-3_1},
url = {https://mlanthology.org/eccv/2024/hummel2024eccv-egocvr/}
}