Learning Multi-Scale Video-Text Correspondence for Weakly Supervised Temporal Article Grounding

Abstract

Weakly Supervised Temporal Article Grounding (WSAG) is a challenging and practical task in video understanding. Specifically, given a video and a relevant article whose sentences are at different semantic scales, WSAG aims to localize corresponding video segments for all “groundable” sentences. Compared to other grounding tasks, e.g., localizing one target segment with respect to a given sentence query, WSAG confronts an essential obstacle rooted in the intricate multi-scale information inherent within both textual and visual modalities. Existing methods overlook the modeling and alignment of such structured information present in multi-scale video segments and hierarchical textual content. To this end, we propose a Multi-Scale Video-Text Correspondence Learning (MVTCL) framework, which enhances grounding performance in complex scenes by modeling multi-scale semantic correspondence both within and between modalities. Specifically, MVTCL initially aggregates video content spanning distinct temporal scales and leverages hierarchical textual relationships in both temporal and semantic dimensions via a semantic calibration module. Then a multi-scale contrastive learning module is introduced to generate more discriminative representations by selecting typical contexts and performing inter-video contrastive learning. Through the multi-scale semantic calibration architecture and supervision design, our method achieves new state-of-the-art performance on existing WSAG benchmarks.
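
The abstract names two ingredients: aggregating video features over several temporal scales and contrasting video and sentence embeddings. Below is a minimal, hypothetical PyTorch sketch of those two generic ideas (average-pool aggregation and a symmetric InfoNCE loss); it is not the authors' MVTCL implementation, and all function names, scales, and dimensions are illustrative assumptions.

# Hypothetical sketch: multi-scale temporal pooling + video-text InfoNCE.
# This is NOT the paper's released code; names and shapes are assumptions.
import torch
import torch.nn.functional as F

def aggregate_multi_scale(frame_feats: torch.Tensor, scales=(1, 2, 4)):
    """Average-pool frame features of shape (T, D) over several temporal window sizes."""
    feats_per_scale = []
    for s in scales:
        # (1, D, T) -> avg-pool -> (1, D, T//s); each column summarizes a window of s frames
        pooled = F.avg_pool1d(frame_feats.t().unsqueeze(0), kernel_size=s, stride=s)
        feats_per_scale.append(pooled.squeeze(0).t())   # back to (T//s, D)
    return feats_per_scale

def info_nce(video_emb: torch.Tensor, text_emb: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of matched (video, sentence) embeddings, both (B, D)."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.t() / tau                   # (B, B) similarity matrix
    labels = torch.arange(v.size(0))           # matched pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

if __name__ == "__main__":
    frames = torch.randn(32, 256)              # toy video: 32 frames, 256-d features
    segments = aggregate_multi_scale(frames)   # shapes (32, 256), (16, 256), (8, 256)
    sentences = torch.randn(8, 256)            # toy article: 8 sentence embeddings
    loss = info_nce(segments[-1], sentences)   # contrast sentences with coarsest segments
    print([s.shape for s in segments], loss.item())

In this toy pairing the coarsest scale happens to match the number of sentences; the actual paper instead calibrates and aligns all scales against the article's hierarchical text structure.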

Cite

Text

Geng et al. "Learning Multi-Scale Video-Text Correspondence for Weakly Supervised Temporal Article Grounding." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I3.27959

Markdown

[Geng et al. "Learning Multi-Scale Video-Text Correspondence for Weakly Supervised Temporal Article Grounding." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/geng2024aaai-learning/) doi:10.1609/AAAI.V38I3.27959

BibTeX

@inproceedings{geng2024aaai-learning,
  title     = {{Learning Multi-Scale Video-Text Correspondence for Weakly Supervised Temporal Article Grounding}},
  author    = {Geng, Wenjia and Liu, Yong and Chen, Lei and Wang, Sujia and Zhou, Jie and Tang, Yansong},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {1896-1904},
  doi       = {10.1609/AAAI.V38I3.27959},
  url       = {https://mlanthology.org/aaai/2024/geng2024aaai-learning/}
}