Template Matters: Understanding the Role of Instruction Templates in Multimodal Language Model Evaluation and Training

Abstract

Current evaluation and training approaches for multimodal language models (MLMs) overlook the influence of instruction format, presenting an elephant-in-the-room problem. Previous research addresses this problem by manually crafting instructions, but this fails to yield significant insights because manually written instructions are limited in diversity and scalability. In this work, we propose a programmatic instruction template generator capable of producing over 3.9B unique template combinations by filling randomly sampled positional synonyms into weight-sampled meta templates, enabling us to comprehensively examine MLM performance across diverse instruction templates. Our experiments with eight common MLMs on five benchmark datasets reveal that MLMs are highly sensitive to templates, with performance gaps of up to 29% between different templates. We further augment the instruction-tuning dataset of LLaVA-1.5 with our template generator and perform instruction tuning on LLaVA-1.5-7B and LLaVA-1.5-13B. Models tuned on our augmented dataset achieve the best overall performance compared with MLMs of the same scale tuned on datasets up to 75 times larger than ours, highlighting the importance of instruction templates in MLM training.
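
A rough sketch of the generator's core idea as described in the abstract: weight-sample a meta template, then fill each positional slot with a randomly sampled synonym. The meta templates, weights, and synonym sets below are hypothetical stand-ins for illustration only, not the authors' actual resources.

```python
import random

# Hypothetical meta templates with positional slots (not the paper's actual set).
META_TEMPLATES = [
    "{please} {answer} the {question} below based on the image.\n{q}",
    "{look} at the image and {answer} the following {question}.\n{q}",
]
WEIGHTS = [0.7, 0.3]  # assumed per-template sampling weights

# Hypothetical positional synonym sets; the combinatorial product of
# template and synonym choices is what yields a very large template space.
SYNONYMS = {
    "please": ["Please", "Kindly"],
    "answer": ["answer", "respond to"],
    "question": ["question", "query"],
    "look": ["Look", "Glance"],
}

def generate_instruction(question: str) -> str:
    # Weighted sampling of a meta template ...
    template = random.choices(META_TEMPLATES, weights=WEIGHTS, k=1)[0]
    # ... then an independently sampled synonym for each positional slot.
    slots = {key: random.choice(options) for key, options in SYNONYMS.items()}
    return template.format(q=question, **slots)

print(generate_instruction("What color is the car?"))
```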

Cite

Text

Wang et al. "Template Matters: Understanding the Role of Instruction Templates in Multimodal Language Model Evaluation and Training." ICLR 2025 Workshops: Data_Problems, 2025.

Markdown

[Wang et al. "Template Matters: Understanding the Role of Instruction Templates in Multimodal Language Model Evaluation and Training." ICLR 2025 Workshops: Data_Problems, 2025.](https://mlanthology.org/iclrw/2025/wang2025iclrw-template/)

BibTeX

@inproceedings{wang2025iclrw-template,
  title     = {{Template Matters: Understanding the Role of Instruction Templates in Multimodal Language Model Evaluation and Training}},
  author    = {Wang, Shijian and Song, Linxin and Zhang, Jieyu and Shimizu, Ryotaro and Luo, Ao and Yao, Li and Chen, Cunjian and McAuley, Julian and Wu, Hanqian},
  booktitle = {ICLR 2025 Workshops: Data_Problems},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/wang2025iclrw-template/}
}