He, Yuxiong

19 publications

ICLR 2025 ConvCodeWorld: Benchmarking Conversational Code Generation in Reproducible Feedback Environments. Hojae Han, Seung-won Hwang, Rajhans Samdani, Yuxiong He
AAAI 2024 DeepSpeed Data Efficiency: Improving Deep Learning Model Quality and Training Efficiency via Efficient Data Sampling and Routing. Conglong Li, Zhewei Yao, Xiaoxia Wu, Minjia Zhang, Connor Holmes, Cheng Li, Yuxiong He
AAAI 2024 Exploring Post-Training Quantization in LLMs from Comprehensive Study to Low Rank Compensation. Zhewei Yao, Xiaoxia Wu, Cheng Li, Stephen Youn, Yuxiong He
ICLR 2024 ZeRO++: Extremely Efficient Collective Communication for Large Model Training. Guanhua Wang, Heyang Qin, Sam Ade Jacobs, Xiaoxia Wu, Connor Holmes, Zhewei Yao, Samyam Rajbhandari, Olatunji Ruwase, Feng Yan, Lei Yang, Yuxiong He
NeurIPSW 2023 DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery Through Sophisticated AI System Technologies. Shuaiwen Leon Song, Bonnie Kruft, Minjia Zhang, Conglong Li, Shiyang Chen, Chengming Zhang, Masahiro Tanaka, Xiaoxia Wu, Mohammed AlQuraishi, Gustaf Ahdritz, Christina Floristean, Rick L. Stevens, Venkatram Vishwanath, Arvind Ramanathan, Sam Foreman, Kyle Hippe, Prasanna Balaprakash, Yuxiong He
ICLR 2023 DySR: Adaptive Super-Resolution via Algorithm and System Co-Design. Syed Zawad, Cheng Li, Zhewei Yao, Elton Zheng, Yuxiong He, Feng Yan
ICLR 2023 Maximizing Communication Efficiency for Large-Scale Training via 0/1 Adam. Yucheng Lu, Conglong Li, Minjia Zhang, Christopher De Sa, Yuxiong He
ICML 2023 Understanding INT4 Quantization for Language Models: Latency Speedup, Composability, and Failure Cases. Xiaoxia Wu, Cheng Li, Reza Yazdani Aminabadi, Zhewei Yao, Yuxiong He
AAAI 2022 Adversarial Data Augmentation for Task-Specific Knowledge Distillation of Pre-Trained Transformers. Minjia Zhang, Uma-Naresh Niranjan, Yuxiong He
ICML 2022 DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale. Samyam Rajbhandari, Conglong Li, Zhewei Yao, Minjia Zhang, Reza Yazdani Aminabadi, Ammar Ahmad Awan, Jeff Rasley, Yuxiong He
NeurIPS 2022 The Stability-Efficiency Dilemma: Investigating Sequence Length Warmup for Training GPT Models. Conglong Li, Minjia Zhang, Yuxiong He
NeurIPS 2022 XTC: Extreme Compression for Pre-Trained Transformers Made Simple and Efficient. Xiaoxia Wu, Zhewei Yao, Minjia Zhang, Conglong Li, Yuxiong He
NeurIPS 2022 ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers. Zhewei Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, Yuxiong He
ICML 2021 1-Bit Adam: Communication Efficient Large-Scale Training with Adam’s Convergence Speed. Hanlin Tang, Shaoduo Gan, Ammar Ahmad Awan, Samyam Rajbhandari, Conglong Li, Xiangru Lian, Ji Liu, Ce Zhang, Yuxiong He
NeurIPS 2021 NxMTransformer: Semi-Structured Sparsification for Natural Language Understanding via ADMM. Connor Holmes, Minjia Zhang, Yuxiong He, Bo Wu
NeurIPS 2021 SimiGrad: Fine-Grained Adaptive Batching for Large Scale Training Using Gradient Similarity Measurement. Heyang Qin, Samyam Rajbhandari, Olatunji Ruwase, Feng Yan, Lei Yang, Yuxiong He
NeurIPS 2020 Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping. Minjia Zhang, Yuxiong He
ICLR 2018 Learning Intrinsic Sparse Structures Within Long Short-Term Memory. Wei Wen, Yuxiong He, Samyam Rajbhandari, Minjia Zhang, Wenhan Wang, Fang Liu, Bin Hu, Yiran Chen, Hai Li
NeurIPS 2018 Navigating with Graph Representations for Fast and Scalable Decoding of Neural Language Models. Minjia Zhang, Wenhan Wang, Xiaodong Liu, Jianfeng Gao, Yuxiong He