Text-Video Retrieval with Disentangled Conceptualization and Set-to-Set Alignment
Abstract
Text-video retrieval is a challenging cross-modal task that aims to align visual entities with natural language descriptions. Current methods either fail to leverage local details or are computationally expensive; worse, they fail to exploit the heterogeneous concepts in the data. In this paper, we propose Disentangled Conceptualization and Set-to-set Alignment (DiCoSA) to simulate the conceptualizing and reasoning process of human beings. For disentangled conceptualization, we divide the coarse feature into multiple latent factors related to semantic concepts. For set-to-set alignment, where a set of visual concepts corresponds to a set of textual concepts, we propose an adaptive pooling method that aggregates semantic concepts to address partial matching. In particular, since we encode concepts independently in only a few dimensions, DiCoSA is superior in both efficiency and granularity, ensuring fine-grained interactions with computational complexity similar to coarse-grained alignment. Extensive experiments on five datasets, including MSR-VTT, LSMDC, MSVD, ActivityNet, and DiDeMo, demonstrate that our method outperforms the existing state-of-the-art methods.
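The core idea in the abstract can be illustrated with a minimal sketch: split each coarse embedding into a few low-dimensional concept vectors, score the text/video concepts pair-wise, and adaptively pool the per-concept similarities so that unmatched concepts contribute less. The function name, the number of concepts, and the softmax-style weighting are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def disentangled_similarity(t, v, num_concepts=8, temperature=0.01):
    """Illustrative DiCoSA-style scoring (names/values are assumptions):
    split D-dim embeddings into `num_concepts` concept vectors, score
    each concept pair, then adaptively pool the per-concept scores."""
    # Disentangled conceptualization: reshape D dims into K chunks of D/K.
    t_c = t.reshape(num_concepts, -1)
    v_c = v.reshape(num_concepts, -1)
    # Per-concept cosine similarity; total work matches one coarse
    # D-dim dot product, so fine granularity stays cheap.
    t_c = t_c / np.linalg.norm(t_c, axis=1, keepdims=True)
    v_c = v_c / np.linalg.norm(v_c, axis=1, keepdims=True)
    sims = np.sum(t_c * v_c, axis=1)          # shape (num_concepts,)
    # Adaptive pooling: down-weight poorly matched concepts, which
    # handles partial matching between the two concept sets.
    weights = np.exp(sims / temperature)
    weights /= weights.sum()
    return float(np.dot(weights, sims))
```

Because each concept lives in only D/K dimensions, the set-to-set alignment costs roughly the same as a single coarse dot product, which is the efficiency argument the abstract makes.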
Cite
Text
Jin et al. "Text-Video Retrieval with Disentangled Conceptualization and Set-to-Set Alignment." International Joint Conference on Artificial Intelligence, 2023. doi:10.24963/IJCAI.2023/104
Markdown
[Jin et al. "Text-Video Retrieval with Disentangled Conceptualization and Set-to-Set Alignment." International Joint Conference on Artificial Intelligence, 2023.](https://mlanthology.org/ijcai/2023/jin2023ijcai-text/) doi:10.24963/IJCAI.2023/104
BibTeX
@inproceedings{jin2023ijcai-text,
title = {{Text-Video Retrieval with Disentangled Conceptualization and Set-to-Set Alignment}},
author = {Jin, Peng and Li, Hao and Cheng, Zesen and Huang, Jinfa and Wang, Zhennan and Yuan, Li and Liu, Chang and Chen, Jie},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2023},
pages = {938--946},
doi = {10.24963/IJCAI.2023/104},
url = {https://mlanthology.org/ijcai/2023/jin2023ijcai-text/}
}