Kim, Eunki

2 publications

ICML 2025. AlphaPO: Reward Shape Matters for LLM Alignment. Aman Gupta, Shao Tang, Qingquan Song, Sirou Zhu, Jiwoo Hong, Ankan Saha, Viral Gupta, Noah Lee, Eunki Kim, Siyu Zhu, Parag Agrawal, Natesh S. Pillai, Sathiya Keerthi.

ICML 2025. On the Robustness of Reward Models for Language Model Alignment. Jiwoo Hong, Noah Lee, Eunki Kim, Guijin Son, Woojin Chung, Aman Gupta, Shao Tang, James Thorne.