xVal: A Continuous Number Encoding for Large Language Models

Abstract

Large Language Models (LLMs) have not yet been broadly adapted for the analysis of scientific datasets due in part to the unique difficulties of tokenizing numbers. We propose xVal, a numerical encoding scheme that represents any real number using just a single token. xVal represents a given real number by scaling a dedicated embedding vector by the number value. Combined with a modified number-inference approach, this strategy renders the model end-to-end continuous when considered as a map from the numbers of the input string to those of the output string. This leads to an inductive bias that is generally more suitable for applications in scientific domains. We empirically evaluate our proposal on a number of synthetic and real-world datasets. Compared with existing number encoding schemes, we find that xVal is more token-efficient and demonstrates improved generalization.
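The abstract's description of the encoding is compact, so here is a minimal toy sketch of the scaling idea under stated assumptions: numbers in the text are replaced by a single placeholder token whose shared embedding vector is multiplied by the (normalized) number value. The vocabulary, the "[NUM]" placeholder name, and the embed_xval helper below are illustrative only and are not the authors' released implementation.

import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 16                                   # toy embedding dimension
vocab = {"[NUM]": 0, "the": 1, "mass": 2, "is": 3}
embedding = rng.normal(size=(len(vocab), D_MODEL))

def embed_xval(tokens, numbers):
    """Embed a token sequence; each "[NUM]" token's embedding is scaled by
    the corresponding number value, so nearby numbers receive nearby
    embeddings -- the continuity property the encoding relies on."""
    out = []
    num_iter = iter(numbers)
    for tok in tokens:
        vec = embedding[vocab[tok]]
        if tok == "[NUM]":
            vec = vec * next(num_iter)          # scale the shared [NUM] embedding
        out.append(vec)
    return np.stack(out)

# "the mass is 0.75": the literal number is carried alongside the token stream.
x = embed_xval(["the", "mass", "is", "[NUM]"], numbers=[0.75])
print(x.shape)  # (4, 16)

In the same spirit, the "modified number-inference approach" mentioned in the abstract can be pictured as decoding values at "[NUM]" positions with a scalar regression head rather than by sampling digit tokens, which is what makes the input-to-output map over numbers end-to-end continuous; the details are in the paper rather than in this sketch.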

Cite

Text

Golkar et al. "xVal: A Continuous Number Encoding for Large Language Models." NeurIPS 2023 Workshops: AI4Science, 2023.

Markdown

[Golkar et al. "xVal: A Continuous Number Encoding for Large Language Models." NeurIPS 2023 Workshops: AI4Science, 2023.](https://mlanthology.org/neuripsw/2023/golkar2023neuripsw-xval/)

BibTeX

@inproceedings{golkar2023neuripsw-xval,
  title     = {{xVal: A Continuous Number Encoding for Large Language Models}},
  author    = {Golkar, Siavash and Pettee, Mariel and Eickenberg, Michael and Bietti, Alberto and Cranmer, Miles and Krawezik, Geraud and Lanusse, Francois and McCabe, Michael and Ohana, Ruben and Parker, Liam Holden and Régaldo-Saint Blancard, Bruno and Tesileanu, Tiberiu and Cho, Kyunghyun and Ho, Shirley},
  booktitle = {NeurIPS 2023 Workshops: AI4Science},
  year      = {2023},
  url       = {https://mlanthology.org/neuripsw/2023/golkar2023neuripsw-xval/}
}