Evaluating the Diversity and Quality of LLM Generated Content
Abstract
Recent work suggests that preference-tuning techniques—including Reinforcement Learning from Human Feedback (RLHF) methods like PPO and GRPO, as well as alternatives like DPO—reduce diversity, creating a dilemma given that such models are widely deployed in applications requiring diverse outputs. To address this, we introduce a framework for measuring effective semantic diversity—diversity among outputs that meet quality thresholds—which better reflects the practical utility of large language models (LLMs). Using open-ended tasks that require no human intervention, we find counterintuitive results: although preference-tuned models—especially those trained via RL—exhibit reduced lexical and syntactic diversity, they produce greater effective semantic diversity than supervised fine-tuned (SFT) or base models, not by increasing diversity among high-quality outputs, but by generating more high-quality outputs overall. We further find that preference tuning reduces syntactic diversity while preserving semantic diversity, revealing a distinction between diversity in form and diversity in content that traditional metrics overlook. Our analysis also shows that smaller models are consistently more parameter-efficient at generating unique content within a fixed sampling budget, offering insights into the relationship between model scaling and diversity. These findings have important implications for applications that require diverse yet high-quality outputs, from creative assistance to synthetic data generation.
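The framework can be pictured as a filter-then-measure procedure: discard outputs below a quality threshold, then measure semantic spread among the survivors. The sketch below is one possible instantiation of that idea, not the paper's exact protocol; the quality_score callable (e.g., an LLM judge or a task-specific checker), the threshold value, and the use of sentence-transformer embeddings for semantic distance are all illustrative assumptions.

```python
# Minimal sketch of "effective semantic diversity": diversity measured only
# among outputs that clear a quality threshold. quality_score, the threshold,
# and the embedding model are assumptions for illustration.
from itertools import combinations

import numpy as np
from sentence_transformers import SentenceTransformer


def effective_semantic_diversity(
    outputs: list[str],
    quality_score,                      # hypothetical callable: str -> float
    threshold: float = 0.5,
    model_name: str = "all-MiniLM-L6-v2",
) -> float:
    """Mean pairwise cosine distance among outputs whose quality score
    meets the threshold; returns 0.0 if fewer than two outputs survive."""
    passing = [o for o in outputs if quality_score(o) >= threshold]
    if len(passing) < 2:
        return 0.0
    model = SentenceTransformer(model_name)
    # Unit-normalized embeddings make cosine similarity a plain dot product.
    emb = model.encode(passing, normalize_embeddings=True)
    dists = [
        1.0 - float(np.dot(emb[i], emb[j]))
        for i, j in combinations(range(len(passing)), 2)
    ]
    return float(np.mean(dists))
```

Returning 0.0 when fewer than two outputs pass is what ties the metric to quality: a model whose samples rarely clear the bar earns no diversity credit, which matches the abstract's point that preference-tuned models can win on this measure simply by producing more high-quality outputs.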
Cite
Text
Shypula et al. "Evaluating the Diversity and Quality of LLM Generated Content." ICLR 2025 Workshops: DL4C, 2025.

Markdown
[Shypula et al. "Evaluating the Diversity and Quality of LLM Generated Content." ICLR 2025 Workshops: DL4C, 2025.](https://mlanthology.org/iclrw/2025/shypula2025iclrw-evaluating/)

BibTeX
@inproceedings{shypula2025iclrw-evaluating,
  title     = {{Evaluating the Diversity and Quality of LLM Generated Content}},
  author    = {Shypula, Alexander and Li, Shuo and Zhang, Botong and Padmakumar, Vishakh and Yin, Kayo and Bastani, Osbert},
  booktitle = {ICLR 2025 Workshops: DL4C},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/shypula2025iclrw-evaluating/}
}