Efficiently Serving Large Multimodal Models Using EPD Disaggregation

Abstract

Large Multimodal Models (LMMs) extend Large Language Models (LLMs) by handling diverse inputs such as images, audio, and video, but at the cost of adding a multimodal encoding stage that increases both computational and memory overhead. This step negatively affects key Service Level Objectives (SLOs), such as time to first token (TTFT) and time per output token (TPOT). We introduce Encode-Prefill-Decode (EPD) Disaggregation, a novel framework that separates the encoding, prefill, and decode stages onto dedicated resources. Unlike current systems, which bundle encoding and prefill together, our approach decouples these stages, unlocking new opportunities and optimizations. These include a mechanism to cache multimedia tokens for efficient transfer, a novel way to parallelize the encoding load within a request, a module for optimal resource allocation for disaggregated serving, and a novel role-switching method to handle changing workload characteristics. Experimental evaluations with popular LMMs show substantial gains in memory efficiency (up to 15$\times$ lower peak memory utilization), batch size (up to 22$\times$ larger), images per request (up to 10$\times$ more), and KV cache size (up to 2.2$\times$ larger). Furthermore, EPD Disaggregation yields significant improvements in SLO attainment (up to 90–100% improvement) and TTFT (up to 71% reduction) compared to systems that do not disaggregate. The code is available at https://github.com/vbdi/epdserve.
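
To make the disaggregation idea concrete, below is a minimal Python sketch of three separated stages passing work through a multimedia-token cache. All names here (`Request`, `MultimodalTokenCache`, `encode_stage`, `prefill_stage`, `decode_stage`) are hypothetical illustrations, not the paper's actual implementation; the "encoder", "KV cache", and "decoder" are placeholder arithmetic standing in for real model components.

```python
"""Minimal sketch of the Encode-Prefill-Decode (EPD) disaggregation idea.
All classes and functions are illustrative assumptions, not the paper's code."""

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Request:
    request_id: str
    images: List[bytes]   # raw multimodal inputs
    prompt: str           # text prompt


class MultimodalTokenCache:
    """Hypothetical cache holding encoded multimedia tokens so they can be
    transferred from encode workers to prefill workers without re-encoding."""

    def __init__(self) -> None:
        self._store: Dict[str, List[float]] = {}

    def put(self, request_id: str, tokens: List[float]) -> None:
        self._store[request_id] = tokens

    def get(self, request_id: str) -> List[float]:
        return self._store.pop(request_id)


def encode_stage(req: Request, cache: MultimodalTokenCache) -> None:
    """Stage 1, on dedicated encode workers; per-image work could be split
    across workers (intra-request encoding parallelism)."""
    tokens = [float(len(img)) for img in req.images]  # stand-in for a vision encoder
    cache.put(req.request_id, tokens)


def prefill_stage(req: Request, cache: MultimodalTokenCache) -> List[float]:
    """Stage 2, on prefill workers; consumes cached multimedia tokens plus
    the text prompt and produces a stand-in 'KV cache'."""
    media_tokens = cache.get(req.request_id)
    text_tokens = [float(ord(c)) for c in req.prompt]
    return media_tokens + text_tokens  # stand-in for the KV cache


def decode_stage(kv_cache: List[float], max_new_tokens: int = 4) -> List[int]:
    """Stage 3, on decode workers; generates output tokens one at a time."""
    return [(int(sum(kv_cache)) + i) % 50000 for i in range(max_new_tokens)]


if __name__ == "__main__":
    cache = MultimodalTokenCache()
    req = Request("r1", images=[b"\x00" * 1024], prompt="Describe the image.")
    encode_stage(req, cache)        # encoding on its own resources
    kv = prefill_stage(req, cache)  # prefill on separate resources
    print(decode_stage(kv))         # decode on separate resources
```

In the real system each stage would run on dedicated GPU workers, with the token cache handling transfer of encoded multimedia tokens between encode and prefill workers and the role-switching mechanism reassigning workers as the workload mix changes; this sketch only mirrors the control flow.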

Cite

Text

Singh et al. "Efficiently Serving Large Multimodal Models Using EPD Disaggregation." Proceedings of the 42nd International Conference on Machine Learning, 2025.

Markdown

[Singh et al. "Efficiently Serving Large Multimodal Models Using EPD Disaggregation." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/singh2025icml-efficiently/)

BibTeX

@inproceedings{singh2025icml-efficiently,
  title     = {{Efficiently Serving Large Multimodal Models Using EPD Disaggregation}},
  author    = {Singh, Gursimran and Wang, Xinglu and Hu, Yifan and Yu, Timothy Tin Long and Xing, Linzi and Jiang, Wei and Wang, Zhefeng and Bai, Xiaolong and Li, Yi and Xiong, Ying and Zhang, Yong and Fan, Zhenan},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  pages     = {55740--55756},
  volume    = {267},
  url       = {https://mlanthology.org/icml/2025/singh2025icml-efficiently/}
}