Wei, Xiuying

10 publications

AAAI 2025 · AtomNet: Designing Tiny Models from Operators Under Extreme MCU Constraints. Zhiwei Dong, Mingzhu Shen, Shihao Bai, Xiuying Wei, Jinyang Guo, Ruihao Gong, Song-Lu Chen, Xianglong Liu, Xu-Cheng Yin
ICLRW 2025 · From Markov to Laplace: How Mamba In-Context Learns Markov Chains. Marco Bondaschi, Nived Rajaraman, Xiuying Wei, Kannan Ramchandran, Razvan Pascanu, Caglar Gulcehre, Michael Gastpar, Ashok Vardhan Makkuva
NeurIPS 2025 · RAT: Bridging RNN Efficiency and Attention Accuracy via Chunk-Based Sequence Modeling. Xiuying Wei, Anunay Yadav, Razvan Pascanu, Caglar Gulcehre
NeurIPS 2024 · Building on Efficient Foundations: Effective Training of LLMs with Structured Feedforward Layers. Xiuying Wei, Skander Moalla, Razvan Pascanu, Caglar Gulcehre
AAAI 2024 · Fast and Controllable Post-Training Sparsity: Learning Optimal Sparsity Allocation with Global Constraint in Minutes. Ruihao Gong, Yang Yong, Zining Wang, Jinyang Guo, Xiuying Wei, Yuqing Ma, Xianglong Liu
ICLR 2024 · QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models. Jing Liu, Ruihao Gong, Xiuying Wei, Zhiwei Dong, Jianfei Cai, Bohan Zhuang
AAAI 2024 · Selective Focus: Investigating Semantics Sensitivity in Post-Training Quantization for Lane Detection. Yunqian Fan, Xiuying Wei, Ruihao Gong, Yuqing Ma, Xiangguo Zhang, Qi Zhang, Xianglong Liu
ICCV 2023 · Lossy and Lossless (L2) Post-Training Model Size Compression. Yumeng Shi, Shihao Bai, Xiuying Wei, Ruihao Gong, Jianlei Yang
NeurIPS 2022 · Outlier Suppression: Pushing the Limit of Low-Bit Transformer Language Models. Xiuying Wei, Yunchen Zhang, Xiangguo Zhang, Ruihao Gong, Shanghang Zhang, Qi Zhang, Fengwei Yu, Xianglong Liu
ICLR 2022 · QDrop: Randomly Dropping Quantization for Extremely Low-Bit Post-Training Quantization. Xiuying Wei, Ruihao Gong, Yuhang Li, Xianglong Liu, Fengwei Yu