Li, Pingzhi

7 publications

NeurIPS 2025. Mozart: Modularized and Efficient MoE Training on 3.5D Wafer-Scale Chiplet Architectures. Shuqing Luo, Ye Han, Pingzhi Li, Jiayin Qin, Jie Peng, Yang Katie Zhao, Yu Cao, Tianlong Chen.
ICML 2025. Occult: Optimizing Collaborative Communications Across Experts for Accelerated Parallel MoE Training and Inference. Shuqing Luo, Pingzhi Li, Jie Peng, Yang Zhao, Yu Cao, Yu Cheng, Tianlong Chen.
ICLR 2025. PortLLM: Personalizing Evolving Large Language Models with Training-Free and Portable Model Patches. Rana Shahroz, Pingzhi Li, Sukwon Yun, Zhenyu Wang, Shahriar Nirjon, Chau-Wai Wong, Tianlong Chen.
NeurIPS 2024. Model-GLUE: Democratized LLM Scaling for a Large Model Zoo in the Wild. Xinyu Zhao, Guoheng Sun, Ruisi Cai, Yukun Zhou, Pingzhi Li, Peihao Wang, Bowen Tan, Yexiao He, Li Chen, Yi Liang, Beidi Chen, Binhang Yuan, Hongyi Wang, Ang Li, Zhangyang Wang, Tianlong Chen.
ICLR 2024. Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy. Pingzhi Li, Zhenyu Zhang, Prateek Yadav, Yi-Lin Sung, Yu Cheng, Mohit Bansal, Tianlong Chen.
ICLRW 2024. Privacy-Preserving Fine-Tuning of Large Language Models Through Flatness. Tiejin Chen, Longchao Da, Huixue Zhou, Pingzhi Li, Kaixiong Zhou, Tianlong Chen, Hua Wei.
ICML 2024. Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark. Yihua Zhang, Pingzhi Li, Junyuan Hong, Jiaxiang Li, Yimeng Zhang, Wenqing Zheng, Pin-Yu Chen, Jason D. Lee, Wotao Yin, Mingyi Hong, Zhangyang Wang, Sijia Liu, Tianlong Chen.