Does Video-Text Pretraining Help Open-Vocabulary Online Action Detection?

Abstract

Video understanding relies on accurate action detection for temporal analysis. However, existing mainstream methods have limitations in real-world applications due to their offline and closed-set evaluation approaches, as well as their dependence on manual annotations. To address these challenges and enable real-time action understanding in open-world scenarios, we propose OV-OAD, a zero-shot online action detector that leverages vision-language models and learns solely from text supervision. By introducing an object-centered decoder unit into a Transformer-based model, we aggregate frames with similar semantics using video-text correspondence. Extensive experiments on four action detection benchmarks demonstrate that OV-OAD outperforms other advanced zero-shot methods. Specifically, it achieves 37.5% mean average precision on THUMOS'14 and 73.8% calibrated average precision on TVSeries. This research establishes a robust baseline for zero-shot transfer in online action detection, enabling scalable solutions for open-world temporal understanding. The code will be available for download at https://github.com/OpenGVLab/OV-OAD.
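
As a rough illustration of the mechanism the abstract describes, the minimal sketch below shows how a small Transformer decoder could aggregate observed frame features and score them against action-name text embeddings for zero-shot prediction. The module names, feature dimension, single learnable query, and cosine-similarity scoring head are illustrative assumptions, not details taken from the paper.

# Hypothetical sketch of text-supervised online action scoring (PyTorch).
# All names, dimensions, and the scoring rule are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameAggregatorDecoder(nn.Module):
    """Aggregates a window of observed frame features into one embedding
    that can be matched against class-name text embeddings (zero-shot)."""
    def __init__(self, dim=512, num_layers=2, num_heads=8):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.query = nn.Parameter(torch.randn(1, 1, dim))  # learnable aggregation query

    def forward(self, frame_feats):
        # frame_feats: (batch, num_frames, dim) features of the frames seen so far
        query = self.query.expand(frame_feats.size(0), -1, -1)
        agg = self.decoder(tgt=query, memory=frame_feats)   # (batch, 1, dim)
        return F.normalize(agg.squeeze(1), dim=-1)

def zero_shot_scores(video_emb, text_embs, temperature=0.07):
    """Cosine similarity between the aggregated video embedding and per-class
    text embeddings; the argmax is the predicted (possibly unseen) action."""
    text_embs = F.normalize(text_embs, dim=-1)
    return video_emb @ text_embs.t() / temperature

if __name__ == "__main__":
    frames = torch.randn(2, 16, 512)     # dummy frame features for 2 clips
    class_texts = torch.randn(10, 512)   # dummy text embeddings for 10 action names
    model = FrameAggregatorDecoder()
    scores = zero_shot_scores(model(frames), class_texts)
    print(scores.shape)                  # torch.Size([2, 10])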

Cite

Text

Zhao et al. "Does Video-Text Pretraining Help Open-Vocabulary Online Action Detection?" Neural Information Processing Systems, 2024. doi:10.52202/079017-1518

Markdown

[Zhao et al. "Does Video-Text Pretraining Help Open-Vocabulary Online Action Detection?" Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/zhao2024neurips-videotext/) doi:10.52202/079017-1518

BibTeX

@inproceedings{zhao2024neurips-videotext,
  title     = {{Does Video-Text Pretraining Help Open-Vocabulary Online Action Detection?}},
  author    = {Zhao, Qingsong and Wang, Yi and Xu, Jilan and He, Yinan and Song, Zifan and Wang, Limin and Qiao, Yu and Zhao, Cairong},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-1518},
  url       = {https://mlanthology.org/neurips/2024/zhao2024neurips-videotext/}
}