Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models

Abstract

Sparse Mixture-of-Experts (MoE) is a neural architecture design that adds learnable parameters to Large Language Models (LLMs) without increasing the per-token computation (FLOPs). Instruction tuning is a technique for training LLMs to follow instructions. We advocate combining these two approaches, as we find that MoE models benefit more from instruction tuning than dense models. In particular, we conduct empirical studies across three experimental setups: (i) Direct finetuning on individual downstream tasks devoid of instruction tuning; (ii) Instruction tuning followed by in-context few-shot or zero-shot generalization on downstream tasks; and (iii) Instruction tuning supplemented by further finetuning on individual downstream tasks. In the first scenario, MoE models overall underperform dense models of identical computational capacity. This narrative, however, changes dramatically with the introduction of instruction tuning (in the second and third scenarios), used independently or in conjunction with task-specific finetuning. Our most powerful model, FLAN-MoE-32B, surpasses the performance of Flan-PaLM-62B on four benchmark tasks, while using only a third of its FLOPs. The advancements embodied by FLAN-MoE inspire a reevaluation of the design principles of large-scale, high-performance language models in the framework of task-agnostic learning.
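
To make the sparsity argument concrete, the sketch below shows a minimal top-2-routed MoE feed-forward layer in PyTorch: each token is processed by only two experts, so the layer's parameter count grows with the number of experts while the per-token FLOPs stay roughly fixed. This is an illustrative sketch under assumed conventions, not the FLAN-MoE implementation; the class name `MoEFeedForward`, the dimensions, and the routing details (softmax over the selected gates, no load-balancing loss) are assumptions for clarity.

```python
# Minimal sketch of a sparsely gated MoE feed-forward layer with top-2 routing.
# Illustrative only; names and routing details are not taken from FLAN-MoE.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEFeedForward(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model). Each token is routed to its top_k experts only,
        # so per-token compute scales with top_k, not with num_experts.
        gate_logits = self.router(x)                               # (tokens, experts)
        weights, expert_ids = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)                       # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = expert_ids[:, k] == e                       # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        return out


# Example usage: tokens = torch.randn(16, 512); layer = MoEFeedForward(512, 2048); y = layer(tokens)
```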

Cite

Text

Shen et al. "Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models." International Conference on Learning Representations, 2024.

Markdown

[Shen et al. "Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/shen2024iclr-mixtureofexperts/)

BibTeX

@inproceedings{shen2024iclr-mixtureofexperts,
  title     = {{Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models}},
  author    = {Shen, Sheng and Hou, Le and Zhou, Yanqi and Du, Nan and Longpre, Shayne and Wei, Jason and Chung, Hyung Won and Zoph, Barret and Fedus, William and Chen, Xinyun and Vu, Tu and Wu, Yuexin and Chen, Wuyang and Webson, Albert and Li, Yunxuan and Zhao, Vincent Y and Yu, Hongkun and Keutzer, Kurt and Darrell, Trevor and Zhou, Denny},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/shen2024iclr-mixtureofexperts/}
}