ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback

Abstract

Learning from human feedback has become a pivotal technique in aligning large language models (LLMs) with human preferences. However, acquiring human feedback at scale and of high quality is bottlenecked by time, labor, and human capability, leaving current datasets small or narrow in topic coverage. This further hinders feedback learning, as well as alignment research, within the open-source community. To address this issue, we explore how to go beyond human feedback and automatically collect high-quality AI feedback as a scalable alternative. Specifically, we identify scale and diversity as the key factors for feedback data to take effect. Accordingly, we first broaden the instructions and responses in both quantity and breadth to encompass a wider range of user-assistant interactions. Then, we meticulously apply a series of techniques to mitigate annotation biases for more reliable AI feedback. We finally present UltraFeedback, a large-scale, high-quality, and diversified AI feedback dataset, which contains over 1 million GPT-4 feedback annotations for 250k user-assistant conversations, covering various aspects. Built upon UltraFeedback, we align a LLaMA-based model via best-of-$n$ sampling and reinforcement learning, demonstrating its exceptional performance on chat benchmarks. Our work validates the effectiveness of scaled AI feedback data in constructing strong open-source chat language models, serving as a solid foundation for future feedback learning research.
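
For readers unfamiliar with the best-of-$n$ sampling step the abstract mentions, the idea is simply: draw $n$ candidate responses from the policy model and keep the one a reward model scores highest. The sketch below illustrates this under stated assumptions; sample_response and reward_model are hypothetical placeholders standing in for the policy model's decoder and a preference model (e.g., one trained on UltraFeedback annotations), not APIs from the paper.

from typing import Callable, List

def best_of_n(
    prompt: str,
    sample_response: Callable[[str], str],      # placeholder: draws one response from the policy LM
    reward_model: Callable[[str, str], float],  # placeholder: scores a (prompt, response) pair
    n: int = 16,
) -> str:
    """Sample n candidate responses and return the one the reward model scores highest."""
    candidates: List[str] = [sample_response(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: reward_model(prompt, c))

Because selection only requires ranking candidates, any scalar-valued preference model can be plugged in as reward_model; the same model can also serve as the reward signal for the reinforcement learning stage.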

Cite

Text

Cui et al. "ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback." International Conference on Machine Learning, 2024.

Markdown

[Cui et al. "ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback." International Conference on Machine Learning, 2024.](https://mlanthology.org/icml/2024/cui2024icml-ultrafeedback/)

BibTeX

@inproceedings{cui2024icml-ultrafeedback,
  title     = {{ULTRAFEEDBACK: Boosting Language Models with Scaled AI Feedback}},
  author    = {Cui, Ganqu and Yuan, Lifan and Ding, Ning and Yao, Guanming and He, Bingxiang and Zhu, Wei and Ni, Yuan and Xie, Guotong and Xie, Ruobing and Lin, Yankai and Liu, Zhiyuan and Sun, Maosong},
  booktitle = {International Conference on Machine Learning},
  year      = {2024},
  pages     = {9722--9744},
  volume    = {235},
  url       = {https://mlanthology.org/icml/2024/cui2024icml-ultrafeedback/}
}