One Goal, Many Challenges: Robust Preference Optimization amid Content-Aware and Multi-Source Noise
Abstract
Large Language Models (LLMs) have made significant strides in generating human-like responses, largely due to preference alignment techniques. However, these methods often assume unbiased human feedback, which is rarely the case in real-world scenarios. This paper introduces Content-Aware Noise-Resilient Preference Optimization (CNRPO), a novel framework that addresses multiple sources of content-dependent noise in preference learning. CNRPO employs a multi-objective optimization approach to separate true preferences from content-aware noise, effectively mitigating its impact. We leverage backdoor attack mechanisms to efficiently learn and control multiple noise sources within a single model. Theoretical analysis and extensive experiments on different synthetic noisy datasets demonstrate that CNRPO significantly improves alignment with primary human preferences while controlling for secondary noise sources and biases, such as response length and harmfulness.
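The abstract describes a multi-objective preference-optimization scheme in which a primary alignment objective is trained jointly with per-source terms that capture content-aware noise, each conditioned backdoor-style on a source-specific trigger. The paper's exact objective is not reproduced here, so the following is only a minimal Python sketch under that reading: the DPO-style pairwise loss, the function names (`dpo_term`, `cnrpo_style_loss`), the trigger-conditioned noise batches, and the weights `noise_weights` are all illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a CNRPO-style multi-objective preference loss.
# Assumption: a DPO-style logistic loss per objective, with noise-source
# batches prompted by backdoor-style trigger tokens (not the authors' code).
import torch
import torch.nn.functional as F


def dpo_term(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Pairwise logistic loss on (chosen, rejected) log-probabilities."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -F.logsigmoid(margin).mean()


def cnrpo_style_loss(primary_batch, noise_batches, noise_weights, beta=0.1):
    """
    primary_batch: policy/reference log-probs for clean preference pairs.
    noise_batches: one batch per content-aware noise source; each is assumed
        to be prefixed with a source-specific trigger so a single model can
        learn, and later switch off, the corresponding bias.
    noise_weights: per-source weights trading off primary alignment against
        control of secondary biases (e.g., response length, harmfulness).
    """
    loss = dpo_term(**primary_batch, beta=beta)
    for batch, weight in zip(noise_batches, noise_weights):
        loss = loss + weight * dpo_term(**batch, beta=beta)
    return loss


# Toy usage with random log-probabilities and two hypothetical noise sources.
keys = ["logp_w", "logp_l", "ref_logp_w", "ref_logp_l"]
primary = {k: torch.randn(8) for k in keys}
length_noise = {k: torch.randn(8) for k in keys}
harm_noise = {k: torch.randn(8) for k in keys}
print(cnrpo_style_loss(primary, [length_noise, harm_noise], noise_weights=[0.5, 0.5]))
```

The weighted sum is one plausible way to realize the "multi-objective" separation described in the abstract; the actual trade-off mechanism and trigger design are specified in the paper itself.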
Cite
Text
Afzali et al. "One Goal, Many Challenges: Robust Preference Optimization amid Content-Aware and Multi-Source Noise." ICLR 2025 Workshops: SCSL, 2025.
Markdown
[Afzali et al. "One Goal, Many Challenges: Robust Preference Optimization amid Content-Aware and Multi-Source Noise." ICLR 2025 Workshops: SCSL, 2025.](https://mlanthology.org/iclrw/2025/afzali2025iclrw-one/)
BibTeX
@inproceedings{afzali2025iclrw-one,
title = {{One Goal, Many Challenges: Robust Preference Optimization amid Content-Aware and Multi-Source Noise}},
author = {Afzali, Amirabbas and Afsharrad, Amirhossein and Mousavi, Seyed Shahabeddin and Lall, Sanjay},
booktitle = {ICLR 2025 Workshops: SCSL},
year = {2025},
url = {https://mlanthology.org/iclrw/2025/afzali2025iclrw-one/}
}