Spectral Editing of Activations for Large Language Model Alignment

Abstract

Large language models (LLMs) often exhibit undesirable behaviours, such as generating untruthful or biased content. Editing their internal representations has been shown to be effective in mitigating such behaviours on top of existing alignment methods. We propose a novel inference-time editing method, spectral editing of activations (SEA), which projects the input representations into directions with maximal covariance with the positive demonstrations (e.g., truthful) while minimising covariance with the negative demonstrations (e.g., hallucinated). We also extend our method to non-linear editing using feature functions. We run extensive experiments on benchmarks concerning truthfulness and bias with six open-source LLMs of different sizes and model families. The results demonstrate the superiority of SEA in effectiveness, generalisation to similar tasks, and computation and data efficiency. We also show that SEA editing has only a limited negative impact on other model capabilities.
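
The abstract describes SEA's core operation: decompose the covariance between the model's activations and activations from positive/negative demonstrations, then keep the directions aligned with the positive set and project away those aligned with the negative set. Below is a minimal NumPy sketch of that idea under stated assumptions; the function name `spectral_projections`, the `keep`/`remove` thresholds, and the centring convention are illustrative placeholders, not the paper's implementation.

```python
import numpy as np

def spectral_projections(H, H_pos, H_neg, keep=0.99, remove=0.99):
    """Illustrative sketch of spectral editing projections (not the authors' code).

    H, H_pos, H_neg: (n, d) activation matrices collected from neutral,
    positive, and negative demonstrations, assumed centred column-wise.
    Returns two (d, d) projection matrices: one spanning directions of high
    covariance with the positive demos, one removing directions of high
    covariance with the negative demos.
    """
    # Cross-covariance between the model's activations and each demo set.
    cov_pos = H.T @ H_pos / len(H)
    cov_neg = H.T @ H_neg / len(H)

    # Left singular vectors, sorted by decreasing singular value.
    U_pos, s_pos, _ = np.linalg.svd(cov_pos, full_matrices=False)
    U_neg, s_neg, _ = np.linalg.svd(cov_neg, full_matrices=False)

    # Keep the smallest set of directions covering `keep` of the positive
    # spectral mass; drop the analogous top `remove` negative directions.
    k_pos = int(np.searchsorted(np.cumsum(s_pos) / s_pos.sum(), keep)) + 1
    k_neg = int(np.searchsorted(np.cumsum(s_neg) / s_neg.sum(), remove)) + 1

    P_pos = U_pos[:, :k_pos] @ U_pos[:, :k_pos].T                        # project onto
    P_neg = np.eye(H.shape[1]) - U_neg[:, :k_neg] @ U_neg[:, :k_neg].T   # project away
    return P_pos, P_neg

# At inference time, an activation h could then be edited as, e.g.:
#   h_edited = P_neg @ (P_pos @ h)
```

The non-linear editing the abstract mentions could, in this sketch, amount to applying the same decomposition to feature-mapped activations (some φ(H) in place of H); the choice of feature function is the paper's, and this framing is an assumption of the sketch.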

Cite

Text

Qiu et al. "Spectral Editing of Activations for Large Language Model Alignment." Neural Information Processing Systems, 2024. doi:10.52202/079017-1815

Markdown

[Qiu et al. "Spectral Editing of Activations for Large Language Model Alignment." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/qiu2024neurips-spectral/) doi:10.52202/079017-1815

BibTeX

@inproceedings{qiu2024neurips-spectral,
  title     = {{Spectral Editing of Activations for Large Language Model Alignment}},
  author    = {Qiu, Yifu and Zhao, Zheng and Ziser, Yftah and Korhonen, Anna and Ponti, Edoardo M. and Cohen, Shay B.},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-1815},
  url       = {https://mlanthology.org/neurips/2024/qiu2024neurips-spectral/}
}