Fine-Grained Bidirectional Attention-Based Generative Networks for Image-Text Matching
Abstract
In this paper, we propose a method called BiKA (Bidirectional Knowledge-assisted embedding and Attention-based generation) for the task of image-text matching. It improves the embedding of images and texts in two respects. First, modality conversion: we build a bidirectional image-text generation network to explore the positive effect of mutual conversion between modalities on image-text feature embedding. Second, relational dependency: we build a bidirectional graph convolutional network to model the dependencies between objects, introducing non-Euclidean data into fine-grained image-text matching to explore the positive effect of these dependencies on fine-grained embedding of images and texts. Experiments on two public datasets show that our method significantly outperforms many state-of-the-art models.
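The relational-dependency idea in the abstract can be illustrated with a minimal graph-convolution step over detected object regions. This is a hedged sketch only, not the authors' implementation: the adjacency matrix, feature dimensions, and normalization scheme below are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' code): one graph-convolution step
# over detected object regions, showing how inter-object dependencies
# can refine region embeddings before fine-grained image-text matching.
import numpy as np

def gcn_layer(X, A, W):
    """One GCN layer with symmetric normalization followed by ReLU.

    X: (n, d) region features; A: (n, n) object-dependency adjacency
    (assumed given, e.g. from spatial or semantic relations); W: (d, k) weights.
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))  # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))   # 4 detected regions, 8-dim features (assumed)
A = np.array([[0, 1, 0, 0],   # toy chain of object dependencies
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = rng.normal(size=(8, 8))
H = gcn_layer(X, A, W)        # relation-aware region embeddings, shape (4, 8)
```

A "bidirectional" variant, as the abstract suggests, would additionally propagate messages along both edge directions of a directed dependency graph; the symmetric toy graph here sidesteps that distinction for brevity.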
Cite
Text
Li et al. "Fine-Grained Bidirectional Attention-Based Generative Networks for Image-Text Matching." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2022. doi:10.1007/978-3-031-26409-2_24
Markdown
[Li et al. "Fine-Grained Bidirectional Attention-Based Generative Networks for Image-Text Matching." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2022.](https://mlanthology.org/ecmlpkdd/2022/li2022ecmlpkdd-finegrained/) doi:10.1007/978-3-031-26409-2_24
BibTeX
@inproceedings{li2022ecmlpkdd-finegrained,
title = {{Fine-Grained Bidirectional Attention-Based Generative Networks for Image-Text Matching}},
author = {Li, Zhixin and Zhu, Jianwei and Wei, Jiahui and Zeng, Yufei},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2022},
pages = {390--406},
doi = {10.1007/978-3-031-26409-2_24},
url = {https://mlanthology.org/ecmlpkdd/2022/li2022ecmlpkdd-finegrained/}
}