Auto-Encoding Morph-Tokens for Multimodal LLM

Abstract

For multimodal LLMs, the synergy of visual comprehension (textual output) and generation (visual output) presents an ongoing challenge. This is due to a conflicting objective: for comprehension, an MLLM needs to abstract the visuals; for generation, it needs to preserve the visuals as much as possible. Thus, a single objective poses a dilemma for visual-tokens. To resolve the conflict, we propose encoding images into morph-tokens that serve a dual purpose: for comprehension, they act as visual prompts instructing the MLLM to generate text; for generation, they take on a different, non-conflicting role as complete visual-tokens for image reconstruction, where the missing visual cues are recovered by the MLLM. Extensive experiments show that morph-tokens achieve a new SOTA for multimodal comprehension and generation simultaneously. Our project is available at https://github.com/DCDmllm/MorphTokens.
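
The dual-role idea in the abstract can be made concrete with a toy sketch. Everything below is a hypothetical illustration, not the paper's implementation: the module names (`MorphTokenizer`, `ToyMLLM`, `ImageDecoder`), all dimensions, and the plain MSE reconstruction loss are assumptions made for exposition.

```python
# Toy sketch of the morph-token auto-encoding idea (illustrative only).
# Morph-tokens are deliberately abstract; the MLLM, not the tokenizer,
# recovers the visual cues needed for image reconstruction.
import torch
import torch.nn as nn

class MorphTokenizer(nn.Module):
    """Compresses image patches into a few abstract morph-tokens."""
    def __init__(self, patch_dim=768, n_tokens=32, d_model=512):
        super().__init__()
        self.proj = nn.Linear(patch_dim, d_model)
        self.queries = nn.Parameter(torch.randn(n_tokens, d_model))
        self.attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)

    def forward(self, patches):                      # (B, N, patch_dim)
        feats = self.proj(patches)
        q = self.queries.expand(patches.size(0), -1, -1)
        morph, _ = self.attn(q, feats, feats)        # (B, n_tokens, d_model)
        return morph

class ToyMLLM(nn.Module):
    """Stand-in for the MLLM. Morph-tokens play two roles here:
    a visual prompt for text generation (text_head) and the seed
    from which full visual-tokens are recovered (visual_head)."""
    def __init__(self, d_model=512, vocab=32000):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.text_head = nn.Linear(d_model, vocab)      # comprehension
        self.visual_head = nn.Linear(d_model, d_model)  # generation

    def forward(self, morph_tokens):
        h = self.backbone(morph_tokens)
        return self.text_head(h), self.visual_head(h)

class ImageDecoder(nn.Module):
    """Reconstructs image patches from the recovered visual-tokens."""
    def __init__(self, d_model=512, patch_dim=768, n_patches=196):
        super().__init__()
        self.patch_queries = nn.Parameter(torch.randn(n_patches, d_model))
        self.attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.out = nn.Linear(d_model, patch_dim)

    def forward(self, visual_tokens):
        q = self.patch_queries.expand(visual_tokens.size(0), -1, -1)
        h, _ = self.attn(q, visual_tokens, visual_tokens)
        return self.out(h)

if __name__ == "__main__":
    patches = torch.randn(2, 196, 768)               # stand-in image patches
    tokenizer, mllm, decoder = MorphTokenizer(), ToyMLLM(), ImageDecoder()

    morph = tokenizer(patches)                       # abstract morph-tokens
    text_logits, visual_tokens = mllm(morph)         # dual, non-conflicting roles
    recon = decoder(visual_tokens)                   # auto-encoding objective
    loss = nn.functional.mse_loss(recon, patches)
    print(text_logits.shape, recon.shape, loss.item())
```

The sketch's one load-bearing choice mirrors the abstract: the tokenizer is free to discard detail because reconstruction is routed through the MLLM's visual head, so the comprehension and generation objectives no longer compete over the same tokens.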

Cite

Text

Pan et al. "Auto-Encoding Morph-Tokens for Multimodal LLM." International Conference on Machine Learning, 2024.

Markdown

[Pan et al. "Auto-Encoding Morph-Tokens for Multimodal LLM." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/pan2024icml-autoencoding/)

BibTeX

@inproceedings{pan2024icml-autoencoding,
  title     = {{Auto-Encoding Morph-Tokens for Multimodal LLM}},
  author    = {Pan, Kaihang and Tang, Siliang and Li, Juncheng and Fan, Zhaoyu and Chow, Wei and Yan, Shuicheng and Chua, Tat-Seng and Zhuang, Yueting and Zhang, Hanwang},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {39308--39323},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/pan2024icml-autoencoding/}
}