Zhao, Ritchie

5 publications

ICML 2025. RocketKV: Accelerating Long-Context LLM Inference via Two-Stage KV Cache Compression. Payman Behnam, Yaosheng Fu, Ritchie Zhao, Po-An Tsai, Zhiding Yu, Alexey Tumanov.

ICLR 2020. Precision Gating: Improving Neural Network Efficiency with Dynamic Dual-Precision Activations. Yichi Zhang, Ritchie Zhao, Weizhe Hua, Nayun Xu, G. Edward Suh, Zhiru Zhang.

NeurIPS 2020. Pushing the Limits of Narrow Precision Inferencing at Cloud Scale with Microsoft Floating Point. Bita Darvish Rouhani, Daniel Lo, Ritchie Zhao, Ming Liu, Jeremy Fowers, Kalin Ovtcharov, Anna Vinogradsky, Sarah Massengill, Lita Yang, Ray Bittner, Alessandro Forin, Haishan Zhu, Taesik Na, Prerak Patel, Shuai Che, Lok Chand Koppaka, Xia Song, Subhojit Som, Kaustav Das, Saurabh T, Steve Reinhardt, Sitaram Lanka, Eric Chung, Doug Burger.

ICML 2019. Improving Neural Network Quantization Without Retraining Using Outlier Channel Splitting. Ritchie Zhao, Yuwei Hu, Jordan Dotzel, Chris De Sa, Zhiru Zhang.

CVPRW 2017. Binarized Convolutional Neural Networks with Separable Filters for Efficient Hardware Acceleration. Jeng-Hau Lin, Tianwei Xing, Ritchie Zhao, Zhiru Zhang, Mani B. Srivastava, Zhuowen Tu, Rajesh K. Gupta.