Decoding-Time Realignment of Language Models
Abstract
Aligning language models with human preferences is crucial for reducing errors and biases in these models. Alignment techniques, such as reinforcement learning from human feedback (RLHF), are typically cast as optimizing a tradeoff between human preference rewards and a proximity regularization term that encourages staying close to the unaligned model. Selecting an appropriate level of regularization is critical: insufficient regularization can lead to reduced model capabilities due to reward hacking, whereas excessive regularization hinders alignment. Traditional methods for finding the optimal regularization level require retraining multiple models with varying regularization strengths. This process, however, is resource-intensive, especially for large models. To address this challenge, we propose decoding-time realignment (DeRa), a simple method to explore and evaluate different regularization strengths in aligned models without retraining. DeRa enables control over the degree of alignment, allowing users to smoothly transition between unaligned and aligned models. It also enhances the efficiency of hyperparameter tuning by enabling the identification of effective regularization strengths using a validation dataset.
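The abstract leaves the decoding-time mechanism implicit. For context: KL-regularized alignment objectives of the form max_pi E[r(x, y)] - beta * KL(pi || pi_ref) are maximized by a geometric mixture, pi_beta(y|x) proportional to pi_ref(y|x) * exp(r(x, y) / beta), which suggests that varying the effective regularization strength amounts to blending the reference and aligned distributions rather than retraining. The sketch below is an illustrative reading of that idea, not the authors' released code: it linearly interpolates the per-token logits of an unaligned (SFT) model and an aligned model with a mixing weight lam, where lam = 0 recovers the SFT model and lam = 1 the aligned model. The function name dera_generate, the model handles, and lam are assumed names for illustration.

import torch
import torch.nn.functional as F

# Illustrative sketch (assumed names, not the authors' code): blend the
# per-token logits of an unaligned (SFT) model and an aligned model at
# decoding time. lam = 0 recovers the SFT model, lam = 1 the aligned
# model; intermediate values probe other effective regularization
# strengths without retraining.
@torch.no_grad()
def dera_generate(sft_model, aligned_model, tokenizer, prompt,
                  lam=0.5, max_new_tokens=64, temperature=1.0):
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        z_sft = sft_model(input_ids=ids).logits[:, -1, :]
        z_aligned = aligned_model(input_ids=ids).logits[:, -1, :]
        # Linear interpolation of logits corresponds to a geometric
        # mixture of the two next-token distributions (up to
        # normalization).
        z = (1.0 - lam) * z_sft + lam * z_aligned
        probs = F.softmax(z / temperature, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0], skip_special_tokens=True)

Under this reading, sweeping lam over a validation set stands in for retraining multiple models with different regularization strengths, which is the hyperparameter-tuning use case the abstract describes.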
Cite
Text

Liu et al. "Decoding-Time Realignment of Language Models." International Conference on Machine Learning, 2024.

Markdown

[Liu et al. "Decoding-Time Realignment of Language Models." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/liu2024icml-decodingtime/)

BibTeX
@inproceedings{liu2024icml-decodingtime,
title = {{Decoding-Time Realignment of Language Models}},
author = {Liu, Tianlin and Guo, Shangmin and Bianco, Leonardo and Calandriello, Daniele and Berthet, Quentin and Llinares-López, Felipe and Hoffmann, Jessica and Dixon, Lucas and Valko, Michal and Blondel, Mathieu},
booktitle = {International Conference on Machine Learning},
year = {2024},
pages = {31015--31031},
volume = {235},
url = {https://mlanthology.org/icml/2024/liu2024icml-decodingtime/}
}