Hashtag Recommendation for Multimodal Microblog Using Co-Attention Network

Abstract

In microblogging services, authors can use hashtags to mark keywords or topics. Many social media applications (e.g., microblog retrieval, classification) can benefit greatly from these manually labeled tags. However, only a small portion of microblogs contain hashtags entered by users. Moreover, many microblog posts contain not only textual content but also images. These visual resources provide valuable information that may not be included in the textual content, and can therefore help recommend hashtags more accurately. Motivated by the successful use of the attention mechanism, we propose a co-attention network that incorporates textual and visual information to recommend hashtags for multimodal tweets. Experimental results on data collected from Twitter demonstrate that the proposed method achieves better performance than state-of-the-art methods that use textual information only.
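The core idea of co-attention, letting each modality guide attention over the other before fusing them for classification, can be sketched as follows. This is a minimal NumPy illustration of one generic co-attention round, not the authors' exact architecture; the feature shapes, the single affinity matrix `W`, and the max-pooling over the affinity scores are simplifying assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(text_feats, img_feats, W):
    """One generic round of co-attention between two modalities (sketch).

    text_feats: (n, d) word-level text features   (hypothetical shapes)
    img_feats:  (m, d) region-level image features
    W:          (d, d) learned affinity weights
    """
    # Affinity matrix: relevance of each word to each image region
    C = text_feats @ W @ img_feats.T              # (n, m)
    # Pool the affinity scores to get per-region / per-word attention
    img_attn = softmax(C.max(axis=0))             # (m,) weights over regions
    txt_attn = softmax(C.max(axis=1))             # (n,) weights over words
    # Attention-weighted summary vector for each modality
    v_img = img_attn @ img_feats                  # (d,)
    v_txt = txt_attn @ text_feats                 # (d,)
    # Fused representation fed to a hashtag classifier
    return np.concatenate([v_txt, v_img])         # (2d,)

# Usage with random features
rng = np.random.default_rng(0)
text = rng.normal(size=(5, 8))    # 5 words, 8-dim features
image = rng.normal(size=(3, 8))   # 3 regions, 8-dim features
joint = co_attention(text, image, np.eye(8))
```

In the paper's setting, the fused vector would be passed to a softmax layer over the hashtag vocabulary; here the example stops at the joint representation.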

Cite

Text

Zhang et al. "Hashtag Recommendation for Multimodal Microblog Using Co-Attention Network." International Joint Conference on Artificial Intelligence, 2017. doi:10.24963/IJCAI.2017/478

Markdown

[Zhang et al. "Hashtag Recommendation for Multimodal Microblog Using Co-Attention Network." International Joint Conference on Artificial Intelligence, 2017.](https://mlanthology.org/ijcai/2017/zhang2017ijcai-hashtag/) doi:10.24963/IJCAI.2017/478

BibTeX

@inproceedings{zhang2017ijcai-hashtag,
  title     = {{Hashtag Recommendation for Multimodal Microblog Using Co-Attention Network}},
  author    = {Zhang, Qi and Wang, Jiawen and Huang, Haoran and Huang, Xuanjing and Gong, Yeyun},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2017},
  pages     = {3420--3426},
  doi       = {10.24963/IJCAI.2017/478},
  url       = {https://mlanthology.org/ijcai/2017/zhang2017ijcai-hashtag/}
}