QoS-Efficient Serving of Multiple Mixture-of-Expert LLMs Using Partial Runtime Reconfiguration

Abstract

The deployment of mixture-of-experts (MoE) large language models (LLMs) presents significant challenges due to their high memory demands. These challenges become even more pronounced in multi-tenant environments, where shared resources must accommodate multiple models, limiting the effectiveness of conventional virtualization techniques. This paper addresses the problem of efficiently serving multiple fine-tuned MoE-LLMs on a single GPU. We propose a serving system that employs similarity-based expert consolidation to reduce the overall memory footprint by sharing similar experts across models. To preserve output quality, we introduce runtime partial reconfiguration, dynamically replacing the non-expert layers when processing requests from different models. As a result, our approach achieves competitive output quality while maintaining throughput comparable to serving a single model, and incurs only a negligible increase in time-to-first-token (TTFT). Experiments on a server with a single NVIDIA A100 GPU (80GB) using Mixtral-8x7B models demonstrate an 85% average reduction in turnaround time compared to NVIDIA’s Multi-Instance GPU (MIG). Furthermore, experiments on Google’s Switch Transformer Base-8 model with up to four variants demonstrate the scalability of our approach and its resilience in maintaining output quality relative to other model-merging baselines.
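
To make the expert-consolidation idea concrete, here is a minimal sketch of how similar experts might be deduplicated across fine-tuned variants. It is not the paper's implementation: the cosine-similarity metric, the 0.95 threshold, and the greedy first-fit pooling policy are illustrative assumptions, as are the function names `consolidate_experts` and `cosine_sim`.

```python
import torch
import torch.nn.functional as F


def cosine_sim(a: torch.Tensor, b: torch.Tensor) -> float:
    # Cosine similarity between two flattened expert weight tensors.
    return F.cosine_similarity(a.flatten(), b.flatten(), dim=0).item()


def consolidate_experts(expert_weights: dict[str, list[torch.Tensor]],
                        threshold: float = 0.95):
    """Build a shared expert pool across model variants (illustrative only).

    expert_weights: model name -> list of per-expert weight tensors
                    (e.g., one flattened FFN weight tensor per expert).
    Returns the shared pool plus, for each model, the pool index that
    serves each of its original experts.
    """
    pool: list[torch.Tensor] = []       # deduplicated expert weights
    mapping: dict[str, list[int]] = {}  # model -> pool index per expert

    for model, experts in expert_weights.items():
        idxs = []
        for w in experts:
            # Greedily reuse the most similar pooled expert, if close enough.
            best, best_sim = -1, -1.0
            for i, p in enumerate(pool):
                s = cosine_sim(w, p)
                if s > best_sim:
                    best, best_sim = i, s
            if best_sim >= threshold:
                idxs.append(best)            # share the existing copy
            else:
                pool.append(w)               # keep this expert verbatim
                idxs.append(len(pool) - 1)
        mapping[model] = idxs

    return pool, mapping
```

Under this scheme only the pooled experts stay resident in GPU memory; the per-model mapping and the (much smaller) non-expert layers are what the runtime partial reconfiguration would swap when a request targets a different model.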

Cite

Text

Imani et al. "QoS-Efficient Serving of Multiple Mixture-of-Expert LLMs Using Partial Runtime Reconfiguration." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Imani et al. "QoS-Efficient Serving of Multiple Mixture-of-Expert LLMs Using Partial Runtime Reconfiguration." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/imani2025icml-qosefficient/)

BibTeX

@inproceedings{imani2025icml-qosefficient,
  title     = {{QoS-Efficient Serving of Multiple Mixture-of-Expert LLMs Using Partial Runtime Reconfiguration}},
  author    = {Imani, Hamidreza and Peng, Jiaxin and Mohseni, Peiman and Amirany, Abdolah and El-Ghazawi, Tarek},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {26433--26445},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/imani2025icml-qosefficient/}
}