Multi-Pair Temporal Sentence Grounding via Multi-Thread Knowledge Transfer Network
Abstract
Given video-query pairs consisting of untrimmed videos and sentence queries, temporal sentence grounding (TSG) aims to locate the query-relevant segments in these videos. Although previous TSG methods have achieved remarkable success, they train on each video-query pair separately and ignore the relationships between different pairs. To this end, in this paper we pose a new setting, Multi-Pair TSG, which aims to co-train these pairs. We propose a novel video-query co-training approach, the Multi-Thread Knowledge Transfer Network, to locate a variety of video-query pairs effectively and efficiently. First, we mine the spatial and temporal semantics across different queries so that they cooperate with each other. To learn intra- and inter-modal representations simultaneously, we design a cross-modal contrast module that explores semantic consistency with a self-supervised strategy. To fully align visual and textual representations between different pairs, we design a prototype alignment strategy that 1) matches object prototypes and phrase prototypes for spatial alignment, and 2) aligns activity prototypes and sentence prototypes for temporal alignment. Finally, we develop an adaptive negative selection module that adaptively generates a threshold for cross-modal matching. Extensive experiments demonstrate both the effectiveness and the efficiency of our proposed method.
Cite
Text
Fang et al. "Multi-Pair Temporal Sentence Grounding via Multi-Thread Knowledge Transfer Network." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I3.32298
Markdown
[Fang et al. "Multi-Pair Temporal Sentence Grounding via Multi-Thread Knowledge Transfer Network." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/fang2025aaai-multi/) doi:10.1609/AAAI.V39I3.32298
BibTeX
@inproceedings{fang2025aaai-multi,
title = {{Multi-Pair Temporal Sentence Grounding via Multi-Thread Knowledge Transfer Network}},
author = {Fang, Xiang and Fang, Wanlong and Wang, Changshuo and Liu, Daizong and Tang, Keke and Dong, Jianfeng and Zhou, Pan and Li, Beibei},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {2915--2923},
doi = {10.1609/AAAI.V39I3.32298},
url = {https://mlanthology.org/aaai/2025/fang2025aaai-multi/}
}