Li, Jiaxiang

9 publications

ICLR 2025. Joint Reward and Policy Learning with Demonstrations and Human Feedback Improves Alignment. Chenliang Li, Siliang Zeng, Zeyi Liao, Jiaxiang Li, Dongyeop Kang, Alfredo Garcia, Mingyi Hong.
ICLRW 2025. Reinforcement Learning in Inference Time: A Perspective from Successive Policy Iterations. Xinnan Zhang, Chenliang Li, Siliang Zeng, Jiaxiang Li, Zhongruo Wang, Songtao Lu, Alfredo Garcia, Mingyi Hong.
JMLR 2025. Riemannian Bilevel Optimization. Jiaxiang Li, Shiqian Ma.
NeurIPS 2024. Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT for LLM Alignment. Jiaxiang Li, Siliang Zeng, Hoi-To Wai, Chenliang Li, Alfredo Garcia, Mingyi Hong.
ICMLW 2024. Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT for LLM Alignment. Jiaxiang Li, Siliang Zeng, Hoi-To Wai, Chenliang Li, Alfredo Garcia, Mingyi Hong.
NeurIPSW 2024. LLM Alignment Through Successive Policy Re-Weighting (SPR). Xinnan Zhang, Siliang Zeng, Jiaxiang Li, Kaixiang Lin, Mingyi Hong.
NeurIPSW 2024. Learning Reward and Policy Jointly from Demonstration and Preference Improves Alignment. Chenliang Li, Siliang Zeng, Zeyi Liao, Jiaxiang Li, Dongyeop Kang, Alfredo Garcia, Mingyi Hong.
ICML 2024. Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark. Yihua Zhang, Pingzhi Li, Junyuan Hong, Jiaxiang Li, Yimeng Zhang, Wenqing Zheng, Pin-Yu Chen, Jason D. Lee, Wotao Yin, Mingyi Hong, Zhangyang Wang, Sijia Liu, Tianlong Chen.
NeurIPS 2024. SLTrain: A Sparse Plus Low Rank Approach for Parameter and Memory Efficient Pretraining. Andi Han, Jiaxiang Li, Wei Huang, Mingyi Hong, Akiko Takeda, Pratik Jawanpuria, Bamdev Mishra.