MobileLLM: Optimizing Sub-Billion Parameter Language Models for On-Device Use Cases
Abstract
This paper addresses the growing need for efficient large language models (LLMs) on mobile devices, driven by increasing cloud costs and latency concerns. We focus on designing top-quality LLMs with fewer than a billion parameters, a practical choice for mobile deployment. Contrary to the prevailing belief that data and parameter quantity play the pivotal role in determining model quality, our investigation underscores the significance of model architecture for sub-billion-scale LLMs. Leveraging deep and thin architectures, coupled with embedding sharing and grouped-query attention mechanisms, we establish a strong baseline network, denoted MobileLLM, which attains a remarkable 2.7%/4.3% accuracy boost over the preceding 125M/350M state-of-the-art models. Additionally, we propose an immediate block-wise weight-sharing approach with no increase in model size and only marginal latency overhead. The resultant models, denoted MobileLLM-LS, demonstrate a further accuracy gain of 0.7%/0.8% over MobileLLM 125M/350M. Moreover, the MobileLLM model family shows significant improvements over previous sub-billion models on chat benchmarks, and demonstrates correctness close to LLaMA-v2 7B in API-calling tasks, highlighting the capability of small models for common on-device use cases.
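To make the architectural ideas concrete, below is a minimal PyTorch sketch (not the authors' released code) of the three techniques the abstract names: grouped-query attention, embedding sharing between the input embedding and the output head, and immediate block-wise weight sharing, in which each transformer block is executed twice in succession so effective depth doubles without adding parameters. All dimensions, layer counts, and class names here (GQABlock, TinyLM) are illustrative assumptions rather than the published MobileLLM configuration, and positional encodings are omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GQABlock(nn.Module):
    """Pre-norm transformer block with grouped-query attention: groups of
    query heads share one key/value head, shrinking the KV projections."""
    def __init__(self, dim, n_heads, n_kv_heads):
        super().__init__()
        assert n_heads % n_kv_heads == 0
        self.n_heads, self.n_kv_heads = n_heads, n_kv_heads
        self.head_dim = dim // n_heads
        self.wq = nn.Linear(dim, dim, bias=False)
        self.wk = nn.Linear(dim, n_kv_heads * self.head_dim, bias=False)
        self.wv = nn.Linear(dim, n_kv_heads * self.head_dim, bias=False)
        self.wo = nn.Linear(dim, dim, bias=False)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim, bias=False),
            nn.SiLU(),
            nn.Linear(4 * dim, dim, bias=False),
        )

    def forward(self, x):
        b, t, _ = x.shape
        h = self.norm1(x)
        q = self.wq(h).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.wk(h).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)
        v = self.wv(h).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)
        # Replicate each KV head across its group of query heads.
        rep = self.n_heads // self.n_kv_heads
        k = k.repeat_interleave(rep, dim=1)
        v = v.repeat_interleave(rep, dim=1)
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        x = x + self.wo(attn.transpose(1, 2).reshape(b, t, -1))
        return x + self.ffn(self.norm2(x))

class TinyLM(nn.Module):
    """Deep-and-thin decoder with embedding sharing and immediate
    block-wise weight sharing (each block run twice in a row)."""
    def __init__(self, vocab=32000, dim=576, n_layers=15,
                 n_heads=9, n_kv_heads=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.blocks = nn.ModuleList(
            [GQABlock(dim, n_heads, n_kv_heads) for _ in range(n_layers)])
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens):
        x = self.embed(tokens)  # (positional encoding omitted for brevity)
        for block in self.blocks:
            x = block(block(x))  # immediate block-wise weight sharing
        # Embedding sharing: reuse the input embedding as the output head.
        return self.norm(x) @ self.embed.weight.t()

logits = TinyLM()(torch.randint(0, 32000, (1, 16)))  # shape (1, 16, 32000)

The point of running a block twice immediately, rather than repeating the whole stack, is that the block's weights can stay resident in fast local memory for the second pass; on devices where weight movement rather than compute dominates, this is consistent with the abstract's claim of only marginal latency overhead at no increase in model size.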
Cite
Text
Liu et al. "MobileLLM: Optimizing Sub-Billion Parameter Language Models for On-Device Use Cases." International Conference on Machine Learning, 2024.
Markdown
[Liu et al. "MobileLLM: Optimizing Sub-Billion Parameter Language Models for On-Device Use Cases." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/liu2024icml-mobilellm/)
BibTeX
@inproceedings{liu2024icml-mobilellm,
title = {{MobileLLM: Optimizing Sub-Billion Parameter Language Models for On-Device Use Cases}},
author = {Liu, Zechun and Zhao, Changsheng and Iandola, Forrest and Lai, Chen and Tian, Yuandong and Fedorov, Igor and Xiong, Yunyang and Chang, Ernie and Shi, Yangyang and Krishnamoorthi, Raghuraman and Lai, Liangzhen and Chandra, Vikas},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {32431--32454},
volume = {235},
url = {https://mlanthology.org/icml/2024/liu2024icml-mobilellm/}
}