Context-Aware Integration of Language and Visual References for Natural Language Tracking
Abstract
Tracking by natural language specification (TNL) aims to consistently localize a target in a video sequence given a linguistic description in the initial frame. Existing methods perform language-based and template-based matching for target reasoning separately and then merge the matching results from the two sources, which suffers from tracking drift when the language and visual templates misalign with the dynamic target state, and from ambiguity in the later merging stage. To tackle these issues, we propose a joint multi-modal tracking framework with 1) a prompt modulation module that leverages the complementarity between temporal visual templates and language expressions, enabling precise and context-aware appearance and linguistic cues, and 2) a unified target decoding module that integrates the multi-modal reference cues and executes the integrated queries on the search image to predict the target location directly in an end-to-end manner. This design ensures spatio-temporal consistency by leveraging historical visual information and provides an integrated solution that generates predictions in a single step. Extensive experiments on TNL2K, OTB-Lang, LaSOT, and RefCOCOg validate the efficacy of the proposed approach. The results demonstrate competitive performance against state-of-the-art methods for both tracking and grounding. Code is available at https://github.com/twotwo2/QueryNLT
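To make the two-stage idea in the abstract concrete, here is a minimal, purely illustrative NumPy sketch of the pipeline shape it describes: cross-modal modulation of language and template cues, followed by a unified query executed on search-image features. All function names, dimensions, and the single-head attention are assumptions for illustration; this is not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(queries, keys, values):
    # Single-head scaled dot-product attention (illustrative).
    d = queries.shape[-1]
    attn = softmax(queries @ keys.T / np.sqrt(d))
    return attn @ values

def modulate_prompts(lang_tokens, template_tokens):
    # Prompt modulation (sketch): each modality attends to the other,
    # so linguistic and visual cues become context-aware.
    lang_mod = lang_tokens + cross_attend(lang_tokens, template_tokens, template_tokens)
    vis_mod = template_tokens + cross_attend(template_tokens, lang_tokens, lang_tokens)
    return lang_mod, vis_mod

def unified_decode(lang_mod, vis_mod, search_feats, w_box):
    # Unified target decoding (sketch): merge the modulated cues into one
    # query set, execute it on the search-image features, and regress a
    # box in a single step. w_box is a hypothetical regression head.
    queries = np.concatenate([lang_mod, vis_mod], axis=0)
    target = cross_attend(queries, search_feats, search_feats).mean(axis=0)
    return target @ w_box  # (cx, cy, w, h)

# Toy usage with random features standing in for encoder outputs.
rng = np.random.default_rng(0)
lang = rng.normal(size=(5, 16))      # 5 language tokens
template = rng.normal(size=(7, 16))  # 7 visual-template tokens
search = rng.normal(size=(100, 16))  # 10x10 search-feature map, flattened
w_box = rng.normal(size=(16, 4))
lang_mod, vis_mod = modulate_prompts(lang, template)
box = unified_decode(lang_mod, vis_mod, search, w_box)
```

The point of the sketch is the single-pass structure: no separate language-only and template-only matching results are produced and merged afterwards; the fused queries act on the search features once.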
Cite

Shao et al. "Context-Aware Integration of Language and Visual References for Natural Language Tracking." Conference on Computer Vision and Pattern Recognition, 2024. doi:10.1109/CVPR52733.2024.01817

BibTeX
@inproceedings{shao2024cvpr-contextaware,
title = {{Context-Aware Integration of Language and Visual References for Natural Language Tracking}},
author = {Shao, Yanyan and He, Shuting and Ye, Qi and Feng, Yuchao and Luo, Wenhan and Chen, Jiming},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2024},
  pages = {19208--19217},
doi = {10.1109/CVPR52733.2024.01817},
url = {https://mlanthology.org/cvpr/2024/shao2024cvpr-contextaware/}
}