DORi: Discovering Object Relationships for Moment Localization of a Natural Language Query in a Video
Abstract
This paper studies the task of temporal moment localization in a long untrimmed video using a natural language query. Given a query sentence, the goal is to determine the start and end of the relevant segment within the video. Our key innovation is to learn a video feature embedding through a language-conditioned message-passing algorithm suitable for temporal moment localization, which captures the relationships between humans, objects and activities in the video. These relationships are obtained by a spatial sub-graph that contextualizes the scene representation using detected objects and human features. Moreover, a temporal sub-graph captures the activities within the video through time. Our method is evaluated on three standard benchmark datasets, and we also introduce YouCookII as a new benchmark for this task. Experiments show our method outperforms state-of-the-art methods on these datasets, confirming the effectiveness of our approach.
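The core idea of language-conditioned message passing can be sketched as follows. This is a minimal illustrative simplification, not the authors' exact formulation: node features (detected objects and humans) exchange messages over a scene graph, with each node's contribution gated by its similarity to the query embedding. All function and variable names here are hypothetical.

```python
import numpy as np

def language_conditioned_message_passing(node_feats, adj, query, steps=2):
    """Illustrative rounds of message passing where each node's outgoing
    message is gated by a sigmoid of its dot product with the query
    embedding (a hypothetical simplification of language conditioning)."""
    h = node_feats
    for _ in range(steps):
        # Gate each node's features by relevance to the query sentence.
        gate = 1.0 / (1.0 + np.exp(-(h @ query)))      # (N,) sigmoid gates
        messages = adj @ (h * gate[:, None])           # sum gated neighbor features
        deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
        h = 0.5 * h + 0.5 * messages / deg             # residual mean-aggregate update
    return h

# Toy example: 3 scene nodes (objects/humans) with 4-dim features.
rng = np.random.default_rng(0)
nodes = rng.standard_normal((3, 4))
adj = np.array([[0, 1, 1],
                [1, 0, 1],
                [1, 1, 0]], dtype=float)  # fully connected spatial sub-graph
query = rng.standard_normal(4)            # sentence embedding
out = language_conditioned_message_passing(nodes, adj, query)
print(out.shape)  # (3, 4)
```

A temporal sub-graph would apply the same pattern across frame-level nodes connected through time rather than within a single frame.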
Cite
Text
Rodriguez-Opazo et al. "DORi: Discovering Object Relationships for Moment Localization of a Natural Language Query in a Video." Winter Conference on Applications of Computer Vision, 2021.
Markdown
[Rodriguez-Opazo et al. "DORi: Discovering Object Relationships for Moment Localization of a Natural Language Query in a Video." Winter Conference on Applications of Computer Vision, 2021.](https://mlanthology.org/wacv/2021/rodriguezopazo2021wacv-dori/)
BibTeX
@inproceedings{rodriguezopazo2021wacv-dori,
title = {{DORi: Discovering Object Relationships for Moment Localization of a Natural Language Query in a Video}},
author = {Rodriguez-Opazo, Cristian and Marrese-Taylor, Edison and Fernando, Basura and Li, Hongdong and Gould, Stephen},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2021},
pages = {1079-1088},
url = {https://mlanthology.org/wacv/2021/rodriguezopazo2021wacv-dori/}
}