Understanding LLM Embeddings for Regression
Abstract
With the rise of large language models (LLMs) for flexibly processing information as strings, a natural application is regression, specifically by preprocessing string representations into LLM embeddings as downstream features for metric prediction. In this paper, we provide one of the first comprehensive investigations into embedding-based regression and demonstrate that LLM embeddings as features can be better for high-dimensional regression tasks than traditional feature engineering. This regression performance can be explained in part by LLM embeddings over numeric data inherently preserving Lipschitz continuity over the feature space. Furthermore, we quantify the contribution of different model effects, most notably model size and language understanding, which we find, surprisingly, do not always improve regression performance.
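The pipeline the abstract describes (serialize an input as a string, embed it with a frozen model, then regress on the embedding) can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the `SentenceTransformer` encoder stands in for the LLM embedder studied in the paper, and the synthetic data and MLP regressor head are placeholders.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Serialize each input point (e.g., a hyperparameter configuration) as a string.
lrs = rng.uniform(1e-4, 1e-1, size=256)
layers = rng.integers(1, 9, size=256)
xs = [f"learning_rate: {lr:.5f}, num_layers: {n}" for lr, n in zip(lrs, layers)]
ys = -np.log(lrs) + 0.5 * layers + rng.normal(0.0, 0.1, size=256)  # synthetic metric

# Embed the strings with a frozen text encoder (stand-in for an LLM embedder).
embedder = SentenceTransformer("all-MiniLM-L6-v2")
features = embedder.encode(xs)  # shape: (n_samples, embedding_dim)

# Fit a small regressor head on the embeddings and evaluate held-out fit.
x_tr, x_te, y_tr, y_te = train_test_split(features, ys, test_size=0.2, random_state=0)
head = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=1000, random_state=0)
head.fit(x_tr, y_tr)
print("held-out R^2:", head.score(x_te, y_te))
```

The traditional-feature baseline the abstract compares against would instead feed the raw numeric values (here, learning rate and layer count) directly to the same regressor head.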
Cite
Text
Tang et al. "Understanding LLM Embeddings for Regression." Transactions on Machine Learning Research, 2025.
Markdown
[Tang et al. "Understanding LLM Embeddings for Regression." Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/tang2025tmlr-understanding/)
BibTeX
@article{tang2025tmlr-understanding,
  title   = {{Understanding LLM Embeddings for Regression}},
  author  = {Tang, Eric and Yang, Bangding and Song, Xingyou},
  journal = {Transactions on Machine Learning Research},
  year    = {2025},
  url     = {https://mlanthology.org/tmlr/2025/tang2025tmlr-understanding/}
}