Learning Multimodal Word Representation via Dynamic Fusion Methods
Abstract
Multimodal models have been shown to outperform text-based models on learning semantic word representations. However, almost all previous multimodal models treat the representations from different modalities equally, even though information from different modalities clearly contributes differently to the meaning of words. This motivates us to build a multimodal model that can dynamically fuse the semantic representations from different modalities according to different types of words. To that end, we propose three novel dynamic fusion methods that assign importance weights to each modality, where the weights are learned under the weak supervision of word association pairs. Extensive experiments demonstrate that the proposed methods outperform strong unimodal baselines and state-of-the-art multimodal models.
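The abstract does not spell out the three fusion methods, but the core idea of assigning word-specific importance weights to each modality can be illustrated with a small gating network. The sketch below is only an illustration of that general idea, not the paper's architecture: the module names, dimensions, and the cosine-based association loss are assumptions, and the actual methods in the paper may differ.

```python
import torch
import torch.nn as nn


class GatedMultimodalFusion(nn.Module):
    """Illustrative dynamic fusion of text and visual word embeddings.

    A gating network predicts a per-word weight for each modality, so the
    fused representation can lean on text or vision depending on the word.
    This is a hypothetical sketch, not the paper's exact model.
    """

    def __init__(self, text_dim: int, visual_dim: int, fused_dim: int):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, fused_dim)
        self.visual_proj = nn.Linear(visual_dim, fused_dim)
        # Gate maps the concatenated inputs to two modality weights that sum to 1.
        self.gate = nn.Sequential(
            nn.Linear(text_dim + visual_dim, 2),
            nn.Softmax(dim=-1),
        )

    def forward(self, text_vec: torch.Tensor, visual_vec: torch.Tensor) -> torch.Tensor:
        weights = self.gate(torch.cat([text_vec, visual_vec], dim=-1))  # (batch, 2)
        fused = (weights[..., :1] * torch.tanh(self.text_proj(text_vec))
                 + weights[..., 1:] * torch.tanh(self.visual_proj(visual_vec)))
        return fused


def association_loss(fused_a: torch.Tensor, fused_b: torch.Tensor) -> torch.Tensor:
    """Weak supervision from word association pairs (assumed form):
    pull the fused vectors of associated words closer in cosine space."""
    return (1.0 - nn.functional.cosine_similarity(fused_a, fused_b, dim=-1)).mean()
```

In this sketch, the gate makes the modality weights a function of the word itself, so concrete words could receive a larger visual weight while abstract words rely mostly on text; the association loss stands in for the weak supervision signal mentioned in the abstract.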
Cite
Text
Wang et al. "Learning Multimodal Word Representation via Dynamic Fusion Methods." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.12031
Markdown
[Wang et al. "Learning Multimodal Word Representation via Dynamic Fusion Methods." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/wang2018aaai-learning-b/) doi:10.1609/AAAI.V32I1.12031
BibTeX
@inproceedings{wang2018aaai-learning-b,
title = {{Learning Multimodal Word Representation via Dynamic Fusion Methods}},
author = {Wang, Shaonan and Zhang, Jiajun and Zong, Chengqing},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2018},
pages = {5973--5980},
doi = {10.1609/AAAI.V32I1.12031},
url = {https://mlanthology.org/aaai/2018/wang2018aaai-learning-b/}
}