RpBERT: A Text-Image Relation Propagation-Based BERT Model for Multimodal NER
Abstract
Recently, multimodal named entity recognition (MNER) has utilized images to improve the accuracy of NER in tweets. However, most multimodal methods use attention mechanisms to extract visual clues regardless of whether the text and image are relevant. In practice, irrelevant text-image pairs account for a large proportion of tweets. Visual clues that are unrelated to the text can exert uncertain or even negative effects on multimodal model learning. In this paper, we introduce a method of text-image relation propagation into the multimodal BERT model. We integrate soft or hard gates to select visual clues and propose a multitask algorithm to train and validate the effects of relation propagation on the MNER datasets. In the experiments, we analyze in depth the changes in visual attention before and after the use of relation propagation. Our model achieves state-of-the-art performance on the MNER datasets.
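To make the gating idea concrete, below is a minimal PyTorch sketch (not the authors' released code) of how a predicted text-image relevance score could gate visual features before multimodal fusion, so that irrelevant images contribute little or nothing. The module and variable names here are illustrative assumptions, not names from the paper.

```python
import torch
import torch.nn as nn

class SoftVisualGate(nn.Module):
    """Sketch of relation-based gating: scale visual clues by a
    text-image relevance score predicted from the text encoding."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # Predicts a scalar relevance score from the pooled text representation.
        self.relation_head = nn.Linear(hidden_dim, 1)

    def forward(self, text_pooled: torch.Tensor, visual_feats: torch.Tensor,
                hard: bool = False) -> torch.Tensor:
        # text_pooled:  (batch, hidden_dim), e.g., a BERT [CLS] vector
        # visual_feats: (batch, regions, hidden_dim), projected image features
        score = torch.sigmoid(self.relation_head(text_pooled))  # (batch, 1)
        if hard:
            # Hard gate: keep or drop the visual clues entirely.
            score = (score > 0.5).float()
        # Soft gate: scale each visual region by the relevance score.
        return visual_feats * score.unsqueeze(-1)
```

In a multitask setup like the one the abstract describes, the relevance head would additionally be supervised on a text-image relation classification task, while the gated visual features feed the downstream NER objective.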
Cite
Text
Sun et al. "RpBERT: A Text-Image Relation Propagation-Based BERT Model for Multimodal NER." AAAI Conference on Artificial Intelligence, 2021. doi:10.1609/AAAI.V35I15.17633
Markdown
[Sun et al. "RpBERT: A Text-Image Relation Propagation-Based BERT Model for Multimodal NER." AAAI Conference on Artificial Intelligence, 2021.](https://mlanthology.org/aaai/2021/sun2021aaai-rpbert/) doi:10.1609/AAAI.V35I15.17633
BibTeX
@inproceedings{sun2021aaai-rpbert,
title = {{RpBERT: A Text-Image Relation Propagation-Based BERT Model for Multimodal NER}},
author = {Sun, Lin and Wang, Jiquan and Zhang, Kai and Su, Yindu and Weng, Fangsheng},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2021},
pages = {13860--13868},
doi = {10.1609/AAAI.V35I15.17633},
url = {https://mlanthology.org/aaai/2021/sun2021aaai-rpbert/}
}