EmbedLLM: Learning Compact Representations of Large Language Models
Abstract
With hundreds of thousands of language models available on Huggingface today, efficiently evaluating and utilizing these models across various downstream tasks has become increasingly critical. Many existing methods repeatedly learn task-specific representations of Large Language Models (LLMs), which leads to inefficiencies in both time and computational resources. To address this, we propose EmbedLLM, a framework designed to learn compact vector representations of LLMs that facilitate downstream applications involving many models, such as model routing. We introduce an encoder-decoder approach for learning such embeddings, along with a systematic framework to evaluate their effectiveness. Empirical results show that EmbedLLM outperforms prior methods in model routing. Additionally, we demonstrate that our method can forecast a model's performance on multiple benchmarks without incurring additional inference cost. Extensive probing experiments validate that the learned embeddings capture key model characteristics, e.g., whether the model is specialized for coding tasks, even without being explicitly trained on them. We open-source our dataset, code, and embedder to facilitate further research and application.
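The abstract's idea of learning compact model vectors from observed benchmark behavior, then reusing them for routing, can be illustrated with a minimal sketch. The sketch below is an assumption-laden toy, not the paper's implementation: it uses a logistic matrix-factorization stand-in for the encoder-decoder (model and question embeddings whose inner product is decoded into a correctness probability), synthetic correctness data, and a hypothetical `route` helper that picks the model most likely to answer a given question correctly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: M models, Q benchmark questions, and a binary
# correctness matrix C[m, q] = 1 if model m answered question q correctly.
# Real data would come from evaluating models on benchmarks.
M, Q, D = 6, 40, 4  # D = embedding dimension (assumed small / "compact")
true_model = rng.normal(size=(M, D))
true_question = rng.normal(size=(Q, D))
C = (true_model @ true_question.T > 0).astype(float)

# Stand-in for the encoder-decoder: learn model and question embeddings
# whose inner product, passed through a logistic decoder, reconstructs C.
model_emb = rng.normal(scale=0.1, size=(M, D))
question_emb = rng.normal(scale=0.1, size=(Q, D))
lr = 1.0
for _ in range(2000):
    logits = np.clip(model_emb @ question_emb.T, -30, 30)
    probs = 1.0 / (1.0 + np.exp(-logits))
    grad = probs - C  # gradient of the log-loss w.r.t. the logits
    model_emb -= lr * (grad @ question_emb) / Q
    question_emb -= lr * (grad.T @ model_emb) / M

def route(q_emb, model_emb):
    """Routing: send the question to the model with the highest
    predicted probability of answering it correctly."""
    return int(np.argmax(model_emb @ q_emb))
```

Once trained, the model embeddings are reused across tasks: routing is a single argmax over inner products, and forecasting a model's benchmark score amounts to averaging its predicted correctness probabilities, with no extra LLM inference.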
Cite
Text
Zhuang et al. "EmbedLLM: Learning Compact Representations of Large Language Models." International Conference on Learning Representations, 2025.
Markdown
[Zhuang et al. "EmbedLLM: Learning Compact Representations of Large Language Models." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/zhuang2025iclr-embedllm/)
BibTeX
@inproceedings{zhuang2025iclr-embedllm,
title = {{EmbedLLM: Learning Compact Representations of Large Language Models}},
author = {Zhuang, Richard and Wu, Tianhao and Wen, Zhaojin and Li, Andrew and Jiao, Jiantao and Ramchandran, Kannan},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/zhuang2025iclr-embedllm/}
}