Zero-Shot Natural Language Video Localization

Abstract

Understanding videos to localize moments with natural language often requires a large number of expensive annotations of video regions paired with language queries. To eliminate this annotation cost, we make a first attempt to train a natural language video localization (NLVL) model in a zero-shot manner. Inspired by the unsupervised image captioning setup, we require only random text corpora, unlabeled video collections, and an off-the-shelf object detector to train a model. From these unrelated and unpaired data, we propose to generate pseudo-supervision consisting of candidate temporal regions and corresponding query sentences, and we develop a simple NLVL model trained with this pseudo-supervision. Our empirical validation shows that the proposed pseudo-supervised method outperforms several baseline approaches and a number of methods using stronger supervision on Charades-STA and ActivityNet-Captions.
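
To make the described pipeline concrete, below is a minimal, hypothetical sketch of how pseudo-supervision pairs (a query sentence and a temporal region) could be assembled from an unlabeled video and an off-the-shelf object detector. The function names (detect_objects, sample_temporal_region, make_pseudo_query, build_pseudo_pairs) and the bag-of-detected-nouns query format are illustrative assumptions for this sketch only and are not taken from the paper.

# Hypothetical sketch of pseudo-supervision generation for zero-shot NLVL.
import random
from typing import List, Tuple

def detect_objects(frame) -> List[str]:
    """Stand-in for an off-the-shelf object detector; returns object labels
    for one frame. Replace with a real detector in practice."""
    return ["person", "cup", "table"]

def sample_temporal_region(num_frames: int) -> Tuple[int, int]:
    """Sample a candidate temporal region (start, end) from an unlabeled video."""
    start = random.randint(0, num_frames - 2)
    end = random.randint(start + 1, num_frames - 1)
    return start, end

def make_pseudo_query(frames) -> str:
    """Form a pseudo query from object labels detected within the region.
    Here it is a simple bag of detected nouns; a fuller system could compose
    sentences with help from a text corpus."""
    labels = sorted({label for frame in frames for label in detect_objects(frame)})
    return " ".join(labels)

def build_pseudo_pairs(video_frames, num_pairs: int = 4):
    """Generate (pseudo query, temporal region) pairs for one video, which can
    then serve as training targets for an NLVL model."""
    pairs = []
    for _ in range(num_pairs):
        start, end = sample_temporal_region(len(video_frames))
        query = make_pseudo_query(video_frames[start:end + 1])
        pairs.append((query, (start, end)))
    return pairs

if __name__ == "__main__":
    dummy_video = [f"frame_{i}" for i in range(64)]  # placeholder frames
    for query, (s, e) in build_pseudo_pairs(dummy_video):
        print(f"pseudo query: '{query}'  ->  frames [{s}, {e}]")

The sketch only illustrates the idea of pairing sampled temporal regions with queries derived from detected objects; the paper's actual generation procedure and model architecture are described in the full text.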

Cite

Text

Nam et al. "Zero-Shot Natural Language Video Localization." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00150

Markdown

[Nam et al. "Zero-Shot Natural Language Video Localization." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/nam2021iccv-zeroshot/) doi:10.1109/ICCV48922.2021.00150

BibTeX

@inproceedings{nam2021iccv-zeroshot,
  title     = {{Zero-Shot Natural Language Video Localization}},
  author    = {Nam, Jinwoo and Ahn, Daechul and Kang, Dongyeop and Ha, Seong Jong and Choi, Jonghyun},
  booktitle = {International Conference on Computer Vision},
  year      = {2021},
  pages     = {1470--1479},
  doi       = {10.1109/ICCV48922.2021.00150},
  url       = {https://mlanthology.org/iccv/2021/nam2021iccv-zeroshot/}
}