Direct Alignment with Heterogeneous Preferences

Abstract

Alignment with human preferences is commonly framed using a universal reward function, even though human preferences are inherently heterogeneous. We formalize this heterogeneity by introducing user types and examine the limits of the homogeneity assumption. We show that aligning to heterogeneous preferences with a single policy is best achieved using the average reward across user types. However, this requires additional information about annotators. We examine improvements under different information settings, focusing on direct alignment methods. We find that minimal information can yield first-order improvements, while full feedback from each user type leads to consistent learning of the optimal policy. Surprisingly, however, no sample-efficient consistent direct loss exists in this latter setting. These results reveal a fundamental tension between consistency and sample efficiency in direct policy alignment.
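
As a brief illustration of the averaged-reward target mentioned above (notation assumed here, not taken from the paper): if user types $z$ occur with probability $p(z)$ and each type has reward $r_z$, then a single policy aligned to the population corresponds, under the usual KL-regularized objective, to

$$\bar r(x, y) \;=\; \mathbb{E}_{z \sim p}\big[r_z(x, y)\big] \;=\; \sum_z p(z)\, r_z(x, y), \qquad \pi^\star \;=\; \arg\max_{\pi}\; \mathbb{E}_{y \sim \pi(\cdot \mid x)}\big[\bar r(x, y)\big] \;-\; \beta\, \mathrm{KL}\!\big(\pi(\cdot \mid x)\,\|\,\pi_{\mathrm{ref}}(\cdot \mid x)\big),$$

i.e., the optimal single policy is the one aligned to the average reward across user types, which is why estimating that average requires some information about which annotators belong to which type.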

Cite

Text

Shirali et al. "Direct Alignment with Heterogeneous Preferences." Advances in Neural Information Processing Systems, 2025.

Markdown

[Shirali et al. "Direct Alignment with Heterogeneous Preferences." Advances in Neural Information Processing Systems, 2025.](https://mlanthology.org/neurips/2025/shirali2025neurips-direct/)

BibTeX

@inproceedings{shirali2025neurips-direct,
  title     = {{Direct Alignment with Heterogeneous Preferences}},
  author    = {Shirali, Ali and Nasr-Esfahany, Arash and Alomar, Abdullah Omar and Mirtaheri, Parsa and Abebe, Rediet and Procaccia, Ariel D.},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2025},
  url       = {https://mlanthology.org/neurips/2025/shirali2025neurips-direct/}
}