Hierarchical Aligned Multimodal Learning for NER on Tweet Posts

Abstract

Mining structured knowledge from tweets using named entity recognition (NER) can benefit many downstream applications such as recommendation and intention understanding. As tweet posts tend to be multimodal, multimodal named entity recognition (MNER) has attracted growing attention. In this paper, we propose a novel approach that dynamically aligns the image and text sequence and performs multi-level cross-modal learning to augment textual word representations for MNER improvement. Specifically, our framework consists of three main stages: the first focuses on intra-modality representation learning to derive the implicit global and local knowledge of each modality; the second evaluates the relevance between the text and its accompanying image and integrates different-grained visual information based on that relevance; the third enforces semantic refinement via iterative cross-modal interactions and co-attention. We conduct experiments on two open datasets, and the results and detailed analysis demonstrate the advantage of our model.
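The relevance-gated fusion and co-attention described in the second and third stages can be sketched roughly as follows. This is an illustrative NumPy sketch with hypothetical shapes and a scalar relevance score, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relevance_gated_coattention(text, image, relevance):
    """Augment token features with relevance-gated visual context.

    text:      (T, d) token representations
    image:     (R, d) visual region representations
    relevance: scalar in [0, 1], estimated text-image relatedness
    """
    # Cross-modal attention: each token attends over image regions.
    attn = softmax(text @ image.T / np.sqrt(text.shape[1]))  # (T, R)
    visual_ctx = attn @ image                                # (T, d)
    # Gate the visual context by the relevance score, so a
    # mismatched image contributes little to the word features.
    return text + relevance * visual_ctx

rng = np.random.default_rng(0)
text = rng.normal(size=(5, 8))    # 5 tokens, hypothetical dim 8
image = rng.normal(size=(3, 8))   # 3 visual regions
fused = relevance_gated_coattention(text, image, relevance=0.7)
```

In the paper's full pipeline this interaction is applied iteratively and at multiple granularities (global and local), rather than in the single pass shown here.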

Cite

Text

Liu et al. "Hierarchical Aligned Multimodal Learning for NER on Tweet Posts." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I17.29831

Markdown

[Liu et al. "Hierarchical Aligned Multimodal Learning for NER on Tweet Posts." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/liu2024aaai-hierarchical/) doi:10.1609/AAAI.V38I17.29831

BibTeX

@inproceedings{liu2024aaai-hierarchical,
  title     = {{Hierarchical Aligned Multimodal Learning for NER on Tweet Posts}},
  author    = {Liu, Peipei and Li, Hong and Ren, Yimo and Liu, Jie and Si, Shuaizong and Zhu, Hongsong and Sun, Limin},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {18680--18688},
  doi       = {10.1609/AAAI.V38I17.29831},
  url       = {https://mlanthology.org/aaai/2024/liu2024aaai-hierarchical/}
}