Reducing the Probability of Undesirable Outputs in Language Models Using Probabilistic Inference

Abstract

Reinforcement learning (RL) has become a predominant technique to align language models (LMs) with human preferences or to promote outputs deemed desirable by a given reward function. Standard RL approaches optimize average reward, while methods explicitly focused on reducing the probability of undesired outputs typically come at a cost to average-case performance. To improve this tradeoff, we introduce RePULSe, a new training method that augments the standard RL loss with an additional loss that uses learned proposals to guide the sampling of low-reward outputs, and then reduces those outputs’ probability. We run experiments demonstrating that RePULSe produces a better tradeoff of expected reward versus the probability of undesired outputs and is more adversarially robust, compared to standard RL alignment approaches and alternatives.
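
The following is a minimal, hypothetical sketch of the idea described in the abstract, not the paper's actual implementation: a REINFORCE-style term on on-policy samples is combined with an extra term that lowers the policy's log-probability of low-reward sequences drawn from a separate learned proposal. The function names, the reward threshold, and the weighting coefficient `alpha` are illustrative assumptions.

```python
import torch

def combined_loss(logp_onpolicy: torch.Tensor,    # log pi(x) for on-policy samples x
                  reward_onpolicy: torch.Tensor,  # rewards of those samples
                  logp_proposal: torch.Tensor,    # log pi(x') for samples x' from the learned proposal
                  reward_proposal: torch.Tensor,  # rewards of the proposal samples
                  bad_threshold: float = 0.0,
                  alpha: float = 0.1) -> torch.Tensor:
    """Hypothetical combined objective: standard RL term plus an 'unlearning' term
    that reduces the policy's probability of low-reward outputs found by a proposal."""
    # Standard policy-gradient (REINFORCE-style) term: raise the probability of
    # high-reward on-policy outputs.
    rl_term = -(reward_onpolicy.detach() * logp_onpolicy).mean()

    # Extra term: among proposal-drawn samples, identify those whose reward falls
    # below a threshold and push down the policy's log-probability of them.
    is_undesired = (reward_proposal < bad_threshold).float()
    unlearn_term = (is_undesired * logp_proposal).sum() / is_undesired.sum().clamp(min=1.0)

    return rl_term + alpha * unlearn_term
```

In this sketch, minimizing `rl_term` corresponds to the usual average-reward objective, while minimizing `unlearn_term` directly decreases the probability the policy assigns to the low-reward outputs surfaced by the proposal; how the proposal itself is trained and how the two terms are weighted are design choices specified in the paper, not here.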

Cite

Text

Zhao et al. "Reducing the Probability of Undesirable Outputs in Language Models Using Probabilistic Inference." Advances in Neural Information Processing Systems, 2025.

Markdown

[Zhao et al. "Reducing the Probability of Undesirable Outputs in Language Models Using Probabilistic Inference." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/zhao2025neurips-reducing/)

BibTeX

@inproceedings{zhao2025neurips-reducing,
  title     = {{Reducing the Probability of Undesirable Outputs in Language Models Using Probabilistic Inference}},
  author    = {Zhao, Stephen and Li, Aidan and Brekelmans, Rob and Grosse, Roger Baker},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/zhao2025neurips-reducing/}
}