Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization
Abstract
Direct Preference Optimization (DPO) and its variants are increasingly used for aligning language models with human preferences. Although these methods are designed to teach a model to generate preferred responses more frequently relative to dispreferred responses, prior work has observed that the likelihood of preferred responses often decreases during training. The current work sheds light on the causes and implications of this counter-intuitive phenomenon, which we term *likelihood displacement*. We demonstrate that likelihood displacement can be *catastrophic*, shifting probability mass from preferred responses to responses with an opposite meaning. As a simple example, training a model to prefer `No` over `Never` can sharply increase the probability of `Yes`. Moreover, when aligning the model to refuse unsafe prompts, we show that such displacement can *unintentionally lead to unalignment*, by shifting probability mass from preferred refusal responses to harmful responses (e.g., reducing the refusal rate of Llama-3-8B-Instruct from 74.4% to 33.4%). We theoretically characterize that likelihood displacement is driven by preferences that induce similar embeddings, as measured by a *centered hidden embedding similarity (CHES)* score. Empirically, the CHES score enables identifying which training samples contribute most to likelihood displacement in a given dataset. Filtering out these samples effectively mitigated unintentional unalignment in our experiments. More broadly, our results highlight the importance of curating data with sufficiently distinct preferences, for which we believe the CHES score may prove valuable.
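The abstract does not spell out how the CHES score is computed; the following is a minimal illustrative sketch, assuming (as a hypothetical reading of the abstract) that the score compares aggregated per-token hidden embeddings of the preferred and dispreferred responses, with a higher score indicating more similar preferences and thus a greater risk of likelihood displacement. The function name and input shapes are assumptions for illustration, not the paper's reference implementation.

```python
import numpy as np

def ches_score(h_preferred: np.ndarray, h_dispreferred: np.ndarray) -> float:
    """Illustrative CHES-style similarity score (sketch, not the paper's code).

    h_preferred, h_dispreferred: (num_tokens, hidden_dim) arrays of
    per-token hidden embeddings for the preferred and dispreferred
    responses of a single training sample (hypothetical inputs).

    The sketch sums each response's token embeddings into a single
    vector, then scores the dispreferred aggregate by its alignment
    with the preferred aggregate, relative to the preferred aggregate's
    own squared norm. Identical responses score 0; a dispreferred
    response pointing strongly along the preferred direction scores
    higher, flagging the pair as having similar (risky) preferences.
    """
    s_pref = h_preferred.sum(axis=0)      # aggregate preferred embedding
    s_dis = h_dispreferred.sum(axis=0)    # aggregate dispreferred embedding
    return float(np.dot(s_dis, s_pref) - np.dot(s_pref, s_pref))

# Deterministic toy usage with constant embeddings:
h = np.ones((3, 4))                  # 3 tokens, hidden dim 4
print(ches_score(h, h))              # 0.0: identical responses
print(ches_score(h, 2 * h) > 0)      # True: strongly aligned pair
```

Under this reading, ranking a dataset's samples by such a score and filtering the highest-scoring pairs would correspond to the mitigation the abstract describes: curating data so that preferred and dispreferred responses remain sufficiently distinct.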
Cite
Text
Razin et al. "Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization." NeurIPS 2024 Workshops: FITML, 2024.
Markdown
[Razin et al. "Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization." NeurIPS 2024 Workshops: FITML, 2024.](https://mlanthology.org/neuripsw/2024/razin2024neuripsw-unintentional/)
BibTeX
@inproceedings{razin2024neuripsw-unintentional,
title = {{Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization}},
author = {Razin, Noam and Malladi, Sadhika and Bhaskar, Adithya and Chen, Danqi and Arora, Sanjeev and Hanin, Boris},
booktitle = {NeurIPS 2024 Workshops: FITML},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/razin2024neuripsw-unintentional/}
}