A Comprehensive Overhaul of Multimodal Assistant with Small Language Models

Abstract

Multimodal Large Language Models (MLLMs) have showcased impressive skills in visual understanding and reasoning tasks. Yet their widespread application faces obstacles due to high computational demands during both training and inference, restricting their use to a limited audience within the research and user communities. In this paper, we investigate the design aspects of Multimodal Small Language Models (MSLMs) and propose an efficient multimodal assistant named Mipha, designed to create synergy among key components: visual representation, language models, and optimization strategies. We show that, without increasing the volume of training data, our Mipha-3B outperforms state-of-the-art large MLLMs, notably LLaVA-1.5-13B, on multiple benchmarks. Through detailed discussion, we provide insights and guidelines for developing strong MSLMs that rival the capabilities of MLLMs.

Cite

Text

Zhu et al. "A Comprehensive Overhaul of Multimodal Assistant with Small Language Models." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I10.33194

Markdown

[Zhu et al. "A Comprehensive Overhaul of Multimodal Assistant with Small Language Models." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/zhu2025aaai-comprehensive/) doi:10.1609/AAAI.V39I10.33194

BibTeX

@inproceedings{zhu2025aaai-comprehensive,
  title     = {{A Comprehensive Overhaul of Multimodal Assistant with Small Language Models}},
  author    = {Zhu, Minjie and Zhu, Yichen and Liu, Ning and Liu, Xin and Xu, Zhiyuan and Shen, Chaomin and Peng, Yaxin},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2025},
  pages     = {10986--10994},
  doi       = {10.1609/AAAI.V39I10.33194},
  url       = {https://mlanthology.org/aaai/2025/zhu2025aaai-comprehensive/}
}