Mitigating Short Board Effect via Dynamic Reward Balancing in Multi-Reward LLM Optimization

Abstract

In the current landscape of large language models (LLMs), many evaluation metrics have been developed and are used as rewards during training to improve specific capabilities. However, balancing these metrics and dynamically adjusting reward weights remain challenging, as current approaches often fail to enhance the weaker metrics. To address this, we propose a Dynamic Reward Balancing Optimization framework (DRBO) that mitigates the "short-board effect" by measuring current performance, adjusting reward weights to prioritize weaker metrics, and optimizing the model via reinforcement learning. We apply DRBO to both single-task and multi-type task scenarios, validating its effectiveness on generation-with-citations and online shopping conversation tasks. The results demonstrate improved overall performance and balanced optimization across multiple metrics, effectively handling the diversity and complexity inherent in LLMs.
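A minimal sketch of the weight-balancing loop the abstract describes, to make the "prioritize weaker metrics" idea concrete. The specifics here are assumptions for illustration, not taken from the paper: the min-max normalization, the softmax-over-negated-scores update rule, the `temperature` parameter, and the function names `drbo_weights` / `combined_reward` are all hypothetical.

```python
import numpy as np

def drbo_weights(metric_scores, temperature=1.0):
    """Illustrative reward-weight update: lower-scoring ("short board")
    metrics receive larger weights via a softmax over negated scores.

    NOTE: this update rule is an assumption for illustration; the paper's
    exact formulation may differ.
    """
    scores = np.asarray(metric_scores, dtype=float)
    # Min-max normalize so metrics on different scales are comparable.
    lo, hi = scores.min(), scores.max()
    normalized = (scores - lo) / (hi - lo + 1e-8)
    # Negate: the weakest metric gets the largest logit, hence largest weight.
    logits = -normalized / temperature
    weights = np.exp(logits) / np.exp(logits).sum()
    return weights

def combined_reward(per_metric_rewards, weights):
    """Scalar reward for the RL step: weighted sum of per-metric rewards."""
    return float(np.dot(weights, per_metric_rewards))

# Example: the second metric lags, so it receives the largest weight.
scores = [0.82, 0.41, 0.67]        # measured per-metric performance
w = drbo_weights(scores)           # approx. [0.19, 0.53, 0.28]
print(w, combined_reward([0.9, 0.3, 0.6], w))
```

The design choice this sketch illustrates is the inverse relationship between a metric's current score and its reward weight, which is what counteracts the short-board effect; the assumed `temperature` controls how aggressively the weakest metric is prioritized at each optimization step.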

Cite

Text

Chen et al. "Mitigating Short Board Effect via Dynamic Reward Balancing in Multi-Reward LLM Optimization." ICLR 2025 Workshops: SSI-FM, 2025.

Markdown

[Chen et al. "Mitigating Short Board Effect via Dynamic Reward Balancing in Multi-Reward LLM Optimization." ICLR 2025 Workshops: SSI-FM, 2025.](https://mlanthology.org/iclrw/2025/chen2025iclrw-mitigating/)

BibTeX

@inproceedings{chen2025iclrw-mitigating,
  title     = {{Mitigating Short Board Effect via Dynamic Reward Balancing in Multi-Reward LLM Optimization}},
  author    = {Chen, Nuo and Gao, Yufei and Jin, Yongnan and Hu, Yan and Gao, Anningzhe and Yan, Lingyong and Wang, Benyou},
  booktitle = {ICLR 2025 Workshops: SSI-FM},
  year      = {2025},
  url       = {https://mlanthology.org/iclrw/2025/chen2025iclrw-mitigating/}
}