Beyond Single Concept Vector: Modeling Concept Subspace in LLMs with Gaussian Distribution
Abstract
Probing learned concepts in large language models (LLMs) is crucial for understanding how semantic knowledge is encoded internally. Training linear classifiers on probing tasks is a principled approach to identify the vector representing a certain concept in the representation space. However, the single vector identified for a concept varies with both the data and the training process, making it less robust and weakening its effectiveness in real-world applications. To address this challenge, we propose an approach to approximate the subspace representing a specific concept. Built on linear probing classifiers, we extend concept vectors into a Gaussian Concept Subspace (GCS). We demonstrate GCS's effectiveness by measuring its faithfulness and plausibility across multiple LLMs of different sizes and architectures. Additionally, we use representation intervention tasks to showcase its efficacy in real-world applications such as emotion steering. Experimental results indicate that GCS concept vectors have the potential to balance steering performance with fluency in natural language generation tasks.
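As a rough illustration of the idea the abstract describes, and not the authors' implementation, the sketch below fits a Gaussian over the weight vectors of many linear probes (each trained on a resampled subset of the probing data) and samples new concept vectors from that distribution. All names (fit_gaussian_concept_subspace, sample_concept_vector, n_probes) and the diagonal-covariance choice are assumptions made for this sketch.

```python
# Hedged sketch of a Gaussian Concept Subspace (GCS), assuming the approach
# in the abstract: replace a single probe-derived concept vector with a
# Gaussian fit over many probe weight vectors, then sample from it.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_gaussian_concept_subspace(hidden_states, labels, n_probes=100, rng=None):
    """Fit a Gaussian over linear-probe concept vectors.

    hidden_states: (n_samples, d) LLM hidden representations
    labels: (n_samples,) binary concept labels
    """
    rng = np.random.default_rng(rng)
    n = len(labels)
    vectors = []
    for _ in range(n_probes):
        # Resample the probing data so each probe sees a different subset,
        # capturing the variation of the concept vector across data/training.
        idx = rng.choice(n, size=n, replace=True)
        probe = LogisticRegression(max_iter=1000).fit(hidden_states[idx], labels[idx])
        w = probe.coef_.ravel()
        vectors.append(w / np.linalg.norm(w))  # unit-normalize each vector
    vectors = np.stack(vectors)
    mu = vectors.mean(axis=0)
    # Diagonal covariance (assumption) keeps the estimate stable when d >> n_probes.
    sigma = vectors.std(axis=0)
    return mu, sigma

def sample_concept_vector(mu, sigma, rng=None):
    """Draw one concept vector from the Gaussian concept subspace,
    e.g. for representation intervention such as emotion steering."""
    rng = np.random.default_rng(rng)
    v = rng.normal(mu, sigma)
    return v / np.linalg.norm(v)
```

Sampling several vectors from the fitted Gaussian, rather than relying on one probe's weights, is what the abstract suggests gives GCS its robustness to data and training variation.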
Cite
Text
Zhao et al. "Beyond Single Concept Vector: Modeling Concept Subspace in LLMs with Gaussian Distribution." International Conference on Learning Representations, 2025.
Markdown
[Zhao et al. "Beyond Single Concept Vector: Modeling Concept Subspace in LLMs with Gaussian Distribution." International Conference on Learning Representations, 2025.](https://mlanthology.org/iclr/2025/zhao2025iclr-beyond/)
BibTeX
@inproceedings{zhao2025iclr-beyond,
title = {{Beyond Single Concept Vector: Modeling Concept Subspace in LLMs with Gaussian Distribution}},
author = {Zhao, Haiyan and Zhao, Heng and Shen, Bo and Payani, Ali and Yang, Fan and Du, Mengnan},
booktitle = {International Conference on Learning Representations},
year = {2025},
url = {https://mlanthology.org/iclr/2025/zhao2025iclr-beyond/}
}