UniMS: A Unified Framework for Multimodal Summarization with Knowledge Distillation

Abstract

With the rapid increase of multimedia data, a large body of literature has emerged on multimodal summarization, most of which aims to distill salient information from the textual and image modalities into a pictorial summary accompanied by the most relevant images. Existing methods mostly focus on either extractive or abstractive summarization and rely on the presence and quality of image captions to build image references. We are the first to propose a Unified framework for Multimodal Summarization (UniMS), grounded on BART, that integrates extractive and abstractive objectives as well as selection of the image output. Specifically, we adopt knowledge distillation from a vision-language pretrained model to improve image selection, which removes any reliance on the existence and quality of image captions. In addition, we introduce a visually guided decoder to better integrate the textual and visual modalities when guiding abstractive text generation. Results show that our best model achieves a new state-of-the-art result on a large-scale benchmark dataset. The newly introduced extractive objective and the knowledge distillation technique are shown to bring noticeable improvements to the multimodal summarization task.
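To make the distillation idea above concrete, the sketch below shows one common way such an objective can be implemented: a frozen vision-language teacher scores how well each candidate image matches the text, and the summarizer's image-selection head is trained to match that distribution via a temperature-scaled KL divergence. This is a minimal illustration, not the authors' released code; the function name, temperature value, and exact loss formulation are assumptions for demonstration.

```python
import torch
import torch.nn.functional as F

def image_selection_distillation_loss(student_scores: torch.Tensor,
                                      teacher_scores: torch.Tensor,
                                      temperature: float = 2.0) -> torch.Tensor:
    """Hypothetical distillation loss for image selection.

    student_scores: (batch, num_images) raw logits from the summarizer's
                    image-selection head.
    teacher_scores: (batch, num_images) image-text matching scores from a
                    frozen vision-language pretrained teacher.
    """
    t = temperature
    # Student distribution in log space, teacher as soft targets (no gradient).
    student_log_probs = F.log_softmax(student_scores / t, dim=-1)
    teacher_probs = F.softmax(teacher_scores.detach() / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (t ** 2)

# Toy usage: 4 articles, each with 8 candidate images.
student = torch.randn(4, 8, requires_grad=True)
teacher = torch.randn(4, 8)  # produced by the frozen teacher in practice
loss = image_selection_distillation_loss(student, teacher)
loss.backward()
```

Because the teacher supervises selection directly from image-text matching, no image captions are needed to construct the image references, which is the caption-independence property the abstract highlights.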

Cite

Text

Zhang et al. "UniMS: A Unified Framework for Multimodal Summarization with Knowledge Distillation." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I10.21431

Markdown

[Zhang et al. "UniMS: A Unified Framework for Multimodal Summarization with Knowledge Distillation." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/zhang2022aaai-unims/) doi:10.1609/AAAI.V36I10.21431

BibTeX

@inproceedings{zhang2022aaai-unims,
  title     = {{UniMS: A Unified Framework for Multimodal Summarization with Knowledge Distillation}},
  author    = {Zhang, Zhengkun and Meng, Xiaojun and Wang, Yasheng and Jiang, Xin and Liu, Qun and Yang, Zhenglu},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {11757--11764},
  doi       = {10.1609/AAAI.V36I10.21431},
  url       = {https://mlanthology.org/aaai/2022/zhang2022aaai-unims/}
}