UniBias: Unveiling and Mitigating LLM Bias Through Internal Attention and FFN Manipulation

Abstract

Large language models (LLMs) have demonstrated impressive capabilities in various tasks using the in-context learning (ICL) paradigm. However, their effectiveness is often compromised by inherent bias, leading to prompt brittleness: sensitivity to design settings such as example selection, order, and prompt formatting. Previous studies have addressed LLM bias through external adjustment of model outputs, but the internal mechanisms that give rise to such bias remain unexplored. Our work delves into these mechanisms, particularly investigating how feedforward neural networks (FFNs) and attention heads contribute to LLM bias. By interpreting the contribution of individual FFN vectors and attention heads, we identify the biased LLM components that skew LLM predictions toward specific labels. To mitigate these biases, we introduce UniBias, an inference-only method that effectively identifies and eliminates biased FFN vectors and attention heads. Extensive experiments across 12 NLP datasets demonstrate that UniBias significantly enhances ICL performance and alleviates the prompt brittleness of LLMs.
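
The sketch below is a minimal illustration (not the authors' released implementation) of one idea in the abstract: interpreting each FFN value vector by projecting it onto the vocabulary space through the unembedding matrix and flagging vectors whose logits are strongly skewed toward particular label tokens. It assumes a GPT-2-style model loaded with Hugging Face transformers; the model name, label tokens, and skew threshold are illustrative assumptions, and the attention-head analysis and the actual elimination step are omitted.

# Illustrative sketch: flag FFN value vectors whose vocabulary projections are
# skewed toward one label token (a logit-lens-style analysis). The model name,
# label tokens, and threshold below are assumptions, not the paper's settings.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; the paper evaluates larger LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Assumed single-token verbalizers for a binary sentiment task.
label_ids = [tok(" positive", add_special_tokens=False)["input_ids"][0],
             tok(" negative", add_special_tokens=False)["input_ids"][0]]

W_U = model.get_output_embeddings().weight   # (vocab_size, d_model) unembedding
ln_f = model.transformer.ln_f                # final layer norm

flagged = []  # (layer, ffn_neuron_index) pairs that look label-skewed
with torch.no_grad():
    for layer_idx, block in enumerate(model.transformer.h):
        # In GPT-2, each row of the MLP down-projection is one FFN value vector.
        V = block.mlp.c_proj.weight           # (d_ff, d_model)
        logits = ln_f(V) @ W_U.T              # project to vocabulary space: (d_ff, vocab)
        label_logits = logits[:, label_ids]   # contribution to the two label tokens
        skew = (label_logits[:, 0] - label_logits[:, 1]).abs()
        threshold = skew.mean() + 3.0 * skew.std()   # arbitrary illustrative cutoff
        for i in torch.nonzero(skew > threshold).flatten().tolist():
            flagged.append((layer_idx, i))

print(f"Flagged {len(flagged)} FFN value vectors as potentially label-biased.")

A mitigation step in the spirit of the paper would then down-weight or zero out the flagged components at inference time; this sketch only performs the identification part.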

Cite

Text

Zhou et al. "UniBias: Unveiling and Mitigating LLM Bias Through Internal Attention and FFN Manipulation." Neural Information Processing Systems, 2024. doi:10.52202/079017-3244

Markdown

[Zhou et al. "UniBias: Unveiling and Mitigating LLM Bias Through Internal Attention and FFN Manipulation." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/zhou2024neurips-unibias/) doi:10.52202/079017-3244

BibTeX

@inproceedings{zhou2024neurips-unibias,
  title     = {{UniBias: Unveiling and Mitigating LLM Bias Through Internal Attention and FFN Manipulation}},
  author    = {Zhou, Hanzhang and Feng, Zijian and Zhu, Zixiao and Qian, Junlang and Mao, Kezhi},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-3244},
  url       = {https://mlanthology.org/neurips/2024/zhou2024neurips-unibias/}
}