Diao, Shizhe

17 publications

IJCAI 2025: "Can We Verify Step by Step for Incorrect Answer Detection?" Xin Xu, Shizhe Diao, Can Yang, Yang Wang.
TMLR 2025: "Entropy-Regularized Process Reward Model." Hanning Zhang, Pengcheng Wang, Shizhe Diao, Yong Lin, Rui Pan, Hanze Dong, Dylan Zhang, Pavlo Molchanov, Tong Zhang.
NeurIPS 2025: "GPAS: Accelerating Convergence of LLM Pretraining via Gradient-Preserving Activation Scaling." Tianhao Chen, Xin Xu, Zijing Liu, Pengxiang Li, Xinyuan Song, Ajay Kumar Jaiswal, Fan Zhang, Jishan Hu, Yang Wang, Hao Chen, Shizhe Diao, Shiwei Liu, Yu Li, Lu Yin, Can Yang.
ICLR 2025: "Hymba: A Hybrid-Head Architecture for Small Language Models." Xin Dong, Yonggan Fu, Shizhe Diao, Wonmin Byeon, Zijia Chen, Ameya Sunil Mahabaleshwarkar, Shih-Yang Liu, Matthijs Van keirsbilck, Min-Hung Chen, Yoshi Suhara, Yingyan Celine Lin, Jan Kautz, Pavlo Molchanov.
ICLR 2025: "LongMamba: Enhancing Mamba's Long-Context Capabilities via Training-Free Receptive Field Enlargement." Zhifan Ye, Kejing Xia, Yonggan Fu, Xin Dong, Jihoon Hong, Xiangchi Yuan, Shizhe Diao, Jan Kautz, Pavlo Molchanov, Yingyan Celine Lin.
ICML 2025: "MA-LoT: Model-Collaboration Lean-Based Long Chain-of-Thought Reasoning Enhances Formal Theorem Proving." Ruida Wang, Rui Pan, Yuxin Li, Jipeng Zhang, Yizhen Jia, Shizhe Diao, Renjie Pi, Junjie Hu, Tong Zhang.
NeurIPS 2025: "Nemotron-CLIMB: Clustering-Based Iterative Data Mixture Bootstrapping for Language Model Pre-Training." Shizhe Diao, Yu Yang, Yonggan Fu, Xin Dong, Dan Su, Markus Kliegl, Zijia Chen, Peter Belcak, Yoshi Suhara, Hongxu Yin, Mostofa Patwary, Yingyan Celine Lin, Jan Kautz, Pavlo Molchanov.
NeurIPS 2025: "Nemotron-Flash: Towards Latency-Optimal Hybrid Small Language Models." Yonggan Fu, Xin Dong, Shizhe Diao, Matthijs Van keirsbilck, Hanrong Ye, Wonmin Byeon, Yashaswi Karnati, Lucas Liebenwein, Maksim Khadkevich, Alexander Keller, Jan Kautz, Yingyan Celine Lin, Pavlo Molchanov.
NeurIPS 2025: "ProRL: Prolonged Reinforcement Learning Expands Reasoning Boundaries in Large Language Models." Mingjie Liu, Shizhe Diao, Ximing Lu, Jian Hu, Xin Dong, Yejin Choi, Jan Kautz, Yi Dong.
ICML 2025: "UGPhysics: A Comprehensive Benchmark for Undergraduate Physics Reasoning with Large Language Models." Xin Xu, Qiyun Xu, Tong Xiao, Tianhao Chen, Yuchen Yan, Jiaxin Zhang, Shizhe Diao, Can Yang, Yang Wang.
NeurIPS 2024: "LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning." Rui Pan, Xiang Liu, Shizhe Diao, Renjie Pi, Jipeng Zhang, Chi Han, Tong Zhang.
TMLR 2023: "Black-Box Prompt Learning for Pre-Trained Language Models." Shizhe Diao, Zhichao Huang, Ruijia Xu, Xuechun Li, Yong Lin, Xiao Zhou, Tong Zhang.
TMLR 2023: "RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment." Hanze Dong, Wei Xiong, Deepanshu Goyal, Yihan Zhang, Winnie Chow, Rui Pan, Shizhe Diao, Jipeng Zhang, KaShun Shum, Tong Zhang.
ICCV 2023: "Towards Unifying Medical Vision-and-Language Pre-Training via Soft Prompts." Zhihong Chen, Shizhe Diao, Benyou Wang, Guanbin Li, Xiang Wan.
ICLR 2023: "Write and Paint: Generative Vision-Language Models Are Unified Modal Learners." Shizhe Diao, Wangchunshu Zhou, Xinsong Zhang, Jiawei Wang.
ICML 2022: "VLUE: A Multi-Task Multi-Dimension Benchmark for Evaluating Vision-Language Pre-Training." Wangchunshu Zhou, Yan Zeng, Shizhe Diao, Xinsong Zhang.
NeurIPS 2021: "Efficient Neural Network Training via Forward and Backward Propagation Sparsification." Xiao Zhou, Weizhong Zhang, Zonghao Chen, Shizhe Diao, Tong Zhang.