A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity

Abstract

While alignment algorithms are commonly used to tune pre-trained language models towards user preferences, we lack explanations for the underlying mechanisms by which models become “aligned”, which makes it difficult to explain phenomena like jailbreaks. In this work we study a popular algorithm, direct preference optimization (DPO), and the mechanisms by which it reduces toxicity. Specifically, we first study how toxicity is represented and elicited in pre-trained language models (GPT2-medium, Llama2-7b). We then apply DPO with a carefully crafted pairwise dataset to reduce toxicity. We examine how the resulting models avert toxic outputs, and find that capabilities learned from pre-training are not removed but rather bypassed. We use this insight to demonstrate a simple method to un-align the models, reverting them to their toxic behavior.
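For readers unfamiliar with the objective, the standard DPO loss (as introduced in the original DPO paper, not restated in this abstract and not specific to this work's toxicity dataset) trains the policy $\pi_\theta$ against a frozen reference model $\pi_{\text{ref}}$ on preference pairs $(x, y_w, y_l)$, where $y_w$ is the preferred (here, non-toxic) and $y_l$ the dispreferred (toxic) continuation:

$$
\mathcal{L}_{\text{DPO}}(\pi_\theta; \pi_{\text{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right]
$$

where $\sigma$ is the logistic function and $\beta$ controls how far the tuned policy may drift from the reference model.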

Cite

Text

Lee et al. "A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity." International Conference on Machine Learning, 2024.

Markdown

[Lee et al. "A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/lee2024icml-mechanistic/)

BibTeX

@inproceedings{lee2024icml-mechanistic,
  title     = {{A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity}},
  author    = {Lee, Andrew and Bai, Xiaoyan and Pres, Itamar and Wattenberg, Martin and Kummerfeld, Jonathan K. and Mihalcea, Rada},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {26361--26378},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/lee2024icml-mechanistic/}
}