Chen, Congliang

9 publications

ICLR 2025. Adam-Mini: Use Fewer Learning Rates to Gain More. Yushun Zhang, Congliang Chen, Ziniu Li, Tian Ding, Chenwei Wu, Diederik P. Kingma, Yinyu Ye, Zhi-Quan Luo, Ruoyu Sun.
ICLR 2025. Preserving Diversity in Supervised Fine-Tuning of Large Language Models. Ziniu Li, Congliang Chen, Tian Xu, Zeyu Qin, Jiancong Xiao, Zhi-Quan Luo, Ruoyu Sun.
ICMLW 2024. Adam-Mini: Use Fewer Learning Rates to Gain More. Yushun Zhang, Congliang Chen, Ziniu Li, Tian Ding, Chenwei Wu, Yinyu Ye, Zhi-Quan Luo, Ruoyu Sun.
NeurIPSW 2024. Entropic Distribution Matching for Supervised Fine-Tuning of LLMs: Less Overfitting and Better Diversity. Ziniu Li, Congliang Chen, Tian Xu, Zeyu Qin, Jiancong Xiao, Ruoyu Sun, Zhi-Quan Luo.
NeurIPS 2024. Why Transformers Need Adam: A Hessian Perspective. Yushun Zhang, Congliang Chen, Tian Ding, Ziniu Li, Ruoyu Sun, Zhi-Quan Luo.
ICMLW 2024. Why Transformers Need Adam: A Hessian Perspective. Yushun Zhang, Congliang Chen, Tian Ding, Ziniu Li, Ruoyu Sun, Zhi-Quan Luo.
NeurIPS 2022. Adam Can Converge Without Any Modification on Update Rules. Yushun Zhang, Congliang Chen, Naichen Shi, Ruoyu Sun, Zhi-Quan Luo.
JMLR 2022. Towards Practical Adam: Non-Convexity, Convergence Theory, and Mini-Batch Acceleration. Congliang Chen, Li Shen, Fangyu Zou, Wei Liu.
AISTATS 2021. Communication Efficient Primal-Dual Algorithm for Nonconvex Nonsmooth Distributed Optimization. Congliang Chen, Jiawei Zhang, Li Shen, Peilin Zhao, Zhi-Quan Luo.