Accelerating Direct Preference Optimization with Prefix Sharing
Abstract
Offline paired preference optimization algorithms have become a popular approach for fine-tuning on preference data, outperforming traditional supervised fine-tuning in various tasks. However, traditional implementations often involve redundant computations, especially for tasks with long shared prompts. We introduce prefix sharing for preference tuning, a novel technique that processes chosen and rejected responses as one sequence with a shared prefix. To prevent cross-response contamination, we use a custom block-sparse attention mask. Our method achieves $1.1$-$1.5\times$ improvement in training throughput on popular DPO datasets, without any effect on convergence. When combined with sequence packing, we observe consistent $1.3$-$1.6\times$ speedups, benefiting even datasets with smaller sequence lengths. While we focus on Direct Preference Optimization (DPO), our approach is applicable to other paired preference tuning methods. By enhancing computational efficiency, our work contributes to making preference-based fine-tuning more accessible for a wider range of applications and model sizes.
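For illustration only, the sketch below shows one way such a block-sparse attention mask could be built in PyTorch for a packed sequence laid out as [shared prefix | chosen response | rejected response]. This is not the authors' implementation; the function name, segment layout, and mask convention (True = may attend) are assumptions.

import torch

def prefix_sharing_mask(prefix_len: int, chosen_len: int, rejected_len: int) -> torch.Tensor:
    """Boolean attention mask for a packed [prefix | chosen | rejected] sequence.

    Attention is causal within the sequence; both responses may attend to the
    shared prefix, but the rejected response is blocked from attending to the
    chosen response, preventing cross-response contamination.
    """
    total = prefix_len + chosen_len + rejected_len
    # Start from a standard lower-triangular (causal) mask.
    mask = torch.tril(torch.ones(total, total, dtype=torch.bool))
    # Zero out the block where rejected-response queries would attend to
    # chosen-response keys.
    c_start, c_end = prefix_len, prefix_len + chosen_len
    mask[c_end:, c_start:c_end] = False
    return mask

# Example: a 4-token shared prompt with a 2-token chosen and 3-token rejected response.
print(prefix_sharing_mask(4, 2, 3).int())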
Cite
Text
Wang and Hegde. "Accelerating Direct Preference Optimization with Prefix Sharing." NeurIPS 2024 Workshops: FITML, 2024.
Markdown
[Wang and Hegde. "Accelerating Direct Preference Optimization with Prefix Sharing." NeurIPS 2024 Workshops: FITML, 2024.](https://mlanthology.org/neuripsw/2024/wang2024neuripsw-accelerating/)
BibTeX
@inproceedings{wang2024neuripsw-accelerating,
title = {{Accelerating Direct Preference Optimization with Prefix Sharing}},
author = {Wang, Franklin and Hegde, Sumanth},
booktitle = {NeurIPS 2024 Workshops: FITML},
year = {2024},
url = {https://mlanthology.org/neuripsw/2024/wang2024neuripsw-accelerating/}
}