Wu, Xiaoxia

15 publications

AAAI 2024. DeepSpeed Data Efficiency: Improving Deep Learning Model Quality and Training Efficiency via Efficient Data Sampling and Routing. Conglong Li, Zhewei Yao, Xiaoxia Wu, Minjia Zhang, Connor Holmes, Cheng Li, Yuxiong He.
AAAI 2024. Exploring Post-Training Quantization in LLMs from Comprehensive Study to Low Rank Compensation. Zhewei Yao, Xiaoxia Wu, Cheng Li, Stephen Youn, Yuxiong He.
NeurIPS 2024. Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding. Zhenyu Zhang, Runjin Chen, Shiwei Liu, Zhewei Yao, Olatunji Ruwase, Beidi Chen, Xiaoxia Wu, Zhangyang Wang.
ICLR 2024. ZeRO++: Extremely Efficient Collective Communication for Large Model Training. Guanhua Wang, Heyang Qin, Sam Ade Jacobs, Xiaoxia Wu, Connor Holmes, Zhewei Yao, Samyam Rajbhandari, Olatunji Ruwase, Feng Yan, Lei Yang, Yuxiong He.
NeurIPSW 2023. DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery Through Sophisticated AI System Technologies. Shuaiwen Leon Song, Bonnie Kruft, Minjia Zhang, Conglong Li, Shiyang Chen, Chengming Zhang, Masahiro Tanaka, Xiaoxia Wu, Mohammed AlQuraishi, Gustaf Ahdritz, Christina Floristean, Rick L. Stevens, Venkatram Vishwanath, Arvind Ramanathan, Sam Foreman, Kyle Hippe, Prasanna Balaprakash, Yuxiong He.
ICML 2023. Understanding Int4 Quantization for Language Models: Latency Speedup, Composability, and Failure Cases. Xiaoxia Wu, Cheng Li, Reza Yazdani Aminabadi, Zhewei Yao, Yuxiong He.
AAAI 2022. AdaLoss: A Computationally-Efficient and Provably Convergent Adaptive Gradient Method. Xiaoxia Wu, Yuege Xie, Simon Shaolei Du, Rachel Ward.
NeurIPS 2022. XTC: Extreme Compression for Pre-Trained Transformers Made Simple and Efficient. Xiaoxia Wu, Zhewei Yao, Minjia Zhang, Conglong Li, Yuxiong He.
NeurIPS 2022. ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers. Zhewei Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, Yuxiong He.
ICLR 2021. When Do Curricula Work? Xiaoxia Wu, Ethan Dyer, Behnam Neyshabur.
JMLR 2020. AdaGrad Stepsizes: Sharp Convergence over Nonconvex Landscapes. Rachel Ward, Xiaoxia Wu, Léon Bottou.
AISTATS 2020. Choosing the Sample with Lowest Loss Makes SGD Robust. Vatsal Shah, Xiaoxia Wu, Sujay Sanghavi.
NeurIPS 2020. Implicit Regularization and Convergence for Weight Normalization. Xiaoxia Wu, Edgar Dobriban, Tongzheng Ren, Shanshan Wu, Zhiyuan Li, Suriya Gunasekar, Rachel Ward, Qiang Liu.
AISTATS 2020. Linear Convergence of Adaptive Stochastic Gradient Descent. Yuege Xie, Xiaoxia Wu, Rachel Ward.
ICML 2019. AdaGrad Stepsizes: Sharp Convergence over Nonconvex Landscapes. Rachel Ward, Xiaoxia Wu, Léon Bottou.