Detours for Navigating Instructional Videos

Abstract

We introduce the video detours problem for navigating instructional videos. Given a source video and a natural language query asking to alter the how-to video's current path of execution in a certain way, the goal is to find a related "detour video" that satisfies the requested alteration. To address this challenge, we propose VidDetours, a novel video-language approach that learns to retrieve the targeted temporal segments from a large repository of how-to's using video-and-text conditioned queries. Furthermore, we devise a language-based pipeline that exploits how-to video narration text to create weakly supervised training data. We demonstrate our idea applied to the domain of how-to cooking videos, where a user can detour from their current recipe to find steps with alternate ingredients, tools, and techniques. Validating on a ground truth annotated dataset of 16K samples, we show our model's significant improvements over the best available methods for video retrieval and question answering, with recall rates exceeding the state of the art by 35%.
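The video-and-text conditioned retrieval described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual model: the `fuse` step stands in for VidDetours' learned fusion of source-video and query-text representations, and the toy embeddings and segment tuples are assumptions for demonstration only.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def fuse(video_emb, text_emb, alpha=0.5):
    # A simple convex combination stands in for the learned fusion module
    # that conditions the query on both the source video and the text.
    return [alpha * a + (1 - alpha) * b for a, b in zip(video_emb, text_emb)]

def retrieve_detour(source_video_emb, query_text_emb, candidate_segments):
    """Rank candidate (video_id, start_sec, end_sec, emb) temporal segments
    by similarity to the fused video+text query; return the best match."""
    query = fuse(source_video_emb, query_text_emb)
    return max(candidate_segments, key=lambda seg: cosine(query, seg[3]))

# Toy usage with 3-d embeddings (hypothetical values).
source = [1.0, 0.0, 0.0]      # current recipe video
query = [0.0, 1.0, 0.0]       # e.g., "use butter instead of oil"
candidates = [
    ("vid_a", 10.0, 35.0, [1.0, 1.0, 0.0]),  # segment close to fused query
    ("vid_b", 0.0, 20.0, [0.0, 0.0, 1.0]),
]
best = retrieve_detour(source, query, candidates)
print(best[0])  # "vid_a"
```

In the full system, the embeddings would come from trained video and language encoders, and candidates would be drawn from a large repository of how-to videos rather than a hand-written list.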

Cite

Text

Ashutosh et al. "Detours for Navigating Instructional Videos." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.01779

Markdown

[Ashutosh et al. "Detours for Navigating Instructional Videos." Conference on Computer Vision and Pattern Recognition, 2024.](https://mlanthology.org/cvpr/2024/ashutosh2024cvpr-detours/) doi:10.1109/CVPR52733.2024.01779

BibTeX

@inproceedings{ashutosh2024cvpr-detours,
  title     = {{Detours for Navigating Instructional Videos}},
  author    = {Ashutosh, Kumar and Xue, Zihui and Nagarajan, Tushar and Grauman, Kristen},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2024},
  pages     = {18804--18815},
  doi       = {10.1109/CVPR52733.2024.01779},
  url       = {https://mlanthology.org/cvpr/2024/ashutosh2024cvpr-detours/}
}