Scaling Embedding Layers in Language Models

Abstract

We propose SCONE (**S**calable, **C**ontextualized, **O**ffloaded, **N**-gram **E**mbedding), a new method for extending input embedding layers to enhance language model performance. To avoid increased decoding costs, SCONE retains the original vocabulary while introducing embeddings for a set of frequent $n$-grams. These embeddings provide a contextualized representation for each input token and are learned with a separate model during training. After training, the embeddings are precomputed and stored in off-accelerator memory; during inference, querying them has minimal impact on latency due to the low complexity of embedding lookups. SCONE enables two new scaling strategies: increasing the number of $n$-gram embeddings and scaling the model used to learn them, both while maintaining fixed accelerator usage during inference (in terms of FLOPS and memory). We show that scaling both aspects enables a model with 1B accelerator-resident parameters to outperform a 1.9B-parameter baseline across diverse corpora, while using only about half the FLOPS and accelerator memory during inference.
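To make the inference-time lookup path concrete, the sketch below illustrates the general idea in Python. It is not the authors' implementation: the table sizes, the `longest_ngram_match` helper, and the combination rule (simple addition of the $n$-gram embedding to the token embedding) are assumptions made for illustration only; in SCONE the $n$-gram representations are produced by a separate embedding model during training, and only frozen lookups remain at inference time.

import numpy as np

# --- Hypothetical sizes, chosen only for illustration ---
VOCAB_SIZE = 32_000   # original token vocabulary (unchanged by SCONE)
D_MODEL = 1024        # embedding dimension
MAX_N = 3             # consider n-grams up to this length

rng = np.random.default_rng(0)

# Token embeddings stay on the accelerator as usual.
token_embeddings = rng.standard_normal((VOCAB_SIZE, D_MODEL)).astype(np.float32)

# Precomputed n-gram embeddings are stored off-accelerator (e.g., host RAM);
# a plain dict keyed by token-id tuples stands in for that store here.
ngram_table = {
    (101, 7, 42): rng.standard_normal(D_MODEL).astype(np.float32),
    (7, 42): rng.standard_normal(D_MODEL).astype(np.float32),
}

def longest_ngram_match(tokens, i, max_n=MAX_N):
    """Return the embedding of the longest stored n-gram ending at position i, if any."""
    for n in range(max_n, 1, -1):
        if i - n + 1 < 0:
            continue
        key = tuple(tokens[i - n + 1 : i + 1])
        if key in ngram_table:
            return ngram_table[key]
    return None

def embed(tokens):
    """Token embedding plus an optional contextualized n-gram embedding per position."""
    out = np.zeros((len(tokens), D_MODEL), dtype=np.float32)
    for i, tok in enumerate(tokens):
        vec = token_embeddings[tok]
        ngram_vec = longest_ngram_match(tokens, i)
        if ngram_vec is not None:
            vec = vec + ngram_vec  # combination rule is an assumption of this sketch
        out[i] = vec
    return out

print(embed([5, 101, 7, 42]).shape)  # (4, 1024)

Because the $n$-gram table is only consulted through key-value lookups, it can live off the accelerator, which is why growing the number of $n$-gram embeddings leaves inference-time FLOPS and accelerator memory unchanged.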

Cite

Text

Yu et al. "Scaling Embedding Layers in Language Models." Advances in Neural Information Processing Systems, 2025.

Markdown

[Yu et al. "Scaling Embedding Layers in Language Models." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/yu2025neurips-scaling/)

BibTeX

@inproceedings{yu2025neurips-scaling,
  title     = {{Scaling Embedding Layers in Language Models}},
  author    = {Yu, Da and Cohen, Edith and Ghazi, Badih and Huang, Yangsibo and Kamath, Pritish and Kumar, Ravi and Liu, Daogao and Zhang, Chiyuan},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/yu2025neurips-scaling/}
}