Liu, Shiwei

57 publications

NeurIPS 2025 AlphaDecay: Module-Wise Weight Decay for Heavy-Tailed Balancing in LLMs Di He, Songjun Tu, Ajay Jaiswal, Li Shen, Ganzhao Yuan, Shiwei Liu, Lu Yin
ICLR 2025 Composable Interventions for Language Models Arinbjörn Kolbeinsson, Kyle O'Brien, Tianjin Huang, Shanghua Gao, Shiwei Liu, Jonathan Richard Schwarz, Anurag Jayant Vaidya, Faisal Mahmood, Marinka Zitnik, Tianlong Chen, Thomas Hartvigsen
ICML 2025 From Low Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories, and Applications Ajay Kumar Jaiswal, Yifan Wang, Lu Yin, Shiwei Liu, Runjin Chen, Jiawei Zhao, Ananth Grama, Yuandong Tian, Zhangyang Wang
NeurIPS 2025 GPAS: Accelerating Convergence of LLM Pretraining via Gradient-Preserving Activation Scaling Tianhao Chen, Xin Xu, Zijing Liu, Pengxiang Li, Xinyuan Song, Ajay Kumar Jaiswal, Fan Zhang, Jishan Hu, Yang Wang, Hao Chen, Shizhe Diao, Shiwei Liu, Yu Li, Lu Yin, Can Yang
ICML 2025 LIFT the Veil for the Truth: Principal Weights Emerge After Rank Reduction for Reasoning-Focused Supervised Fine-Tuning Zihang Liu, Tianyu Pang, Oleg Balabanov, Chaoqun Yang, Tianjin Huang, Lu Yin, Yaoqing Yang, Shiwei Liu
ICML 2025 Mask-Enhanced Autoregressive Prediction: Pay Less Attention to Learn More Xialie Zhuang, Zhikai Jia, Jianjin Li, Zhenyu Zhang, Li Shen, Zheng Cao, Shiwei Liu
ICLR 2025 Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN Pengxiang Li, Lu Yin, Shiwei Liu
CPAL 2025 Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients Zhenyu Zhang, Ajay Kumar Jaiswal, Lu Yin, Shiwei Liu, Jiawei Zhao, Yuandong Tian, Zhangyang Wang
AAAI 2025 SIDE: Socially Informed Drought Estimation Toward Understanding Societal Impact Dynamics of Environmental Crisis Lanyu Shang, Bozhang Chen, Shiwei Liu, Yang Zhang, Ruohan Zong, Anav Vora, Ximing Cai, Na Wei, Dong Wang
ICLR 2025 SPAM: Spike-Aware Adam with Momentum Reset for Stable LLM Training Tianjin Huang, Ziquan Zhu, Gaojie Jin, Lu Liu, Zhangyang Wang, Shiwei Liu
ICLRW 2025 SPAM: Spike-Aware Adam with Momentum Reset for Stable LLM Training Tianjin Huang, Ziquan Zhu, Gaojie Jin, Lu Liu, Zhangyang Wang, Shiwei Liu
ICLRW 2025 Stable-SPAM: How to Train in 4-Bit More Stably than 16-Bit Adam Tianjin Huang, Haotian Hu, Zhenyu Zhang, Gaojie Jin, Xiang Li, Li Shen, Tianlong Chen, Lu Liu, Qingsong Wen, Zhangyang Wang, Shiwei Liu
NeurIPS 2025 The Curse of Depth in Large Language Models Wenfang Sun, Xinyuan Song, Pengxiang Li, Lu Yin, Yefeng Zheng, Shiwei Liu
ICLRW 2025 The Curse of Depth in Large Language Models Wenfang Sun, Xinyuan Song, Pengxiang Li, Lu Yin, Yefeng Zheng, Shiwei Liu
AAAI 2025 Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective Can Jin, Tianjin Huang, Yihua Zhang, Mykola Pechenizkiy, Sijia Liu, Shiwei Liu, Tianlong Chen
ICLR 2024 AdaMerging: Adaptive Model Merging for Multi-Task Learning Enneng Yang, Zhenyi Wang, Li Shen, Shiwei Liu, Guibing Guo, Xingwei Wang, Dacheng Tao
ICML 2024 Advancing Dynamic Sparse Training by Exploring Optimization Opportunities Jie Ji, Gen Li, Lu Yin, Minghai Qin, Geng Yuan, Linke Guo, Shiwei Liu, Xiaolong Ma
NeurIPS 2024 AlphaPruning: Using Heavy-Tailed Self Regularization Theory for Improved Layer-Wise Pruning of Large Language Models Haiquan Lu, Yefan Zhou, Shiwei Liu, Zhangyang Wang, Michael W. Mahoney, Yaoqing Yang
ICML 2024 CaM: Cache Merging for Memory-Efficient LLMs Inference Yuxin Zhang, Yuxuan Du, Gen Luo, Yunshan Zhong, Zhenyu Zhang, Shiwei Liu, Rongrong Ji
ICLRW 2024 Composing Knowledge and Compression Interventions for Language Models Arinbjörn Kolbeinsson, Tianjin Huang, Shanghua Gao, Shiwei Liu, Jonathan Richard Schwarz, Anurag Jayant Vaidya, Faisal Mahmood, Marinka Zitnik, Tianlong Chen, Thomas Hartvigsen
ICLR 2024 Dynamic Sparse No Training: Training-Free Fine-Tuning for Sparse LLMs Yuxin Zhang, Lirui Zhao, Mingbao Lin, Yunyun Sun, Yiwu Yao, Xingjia Han, Jared Tanner, Shiwei Liu, Rongrong Ji
NeurIPS 2024 E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient 3D Medical Image Segmentation Boqian Wu, Qiao Xiao, Shiwei Liu, Lu Yin, Mykola Pechenizkiy, Decebal Constantin Mocanu, Maurice van Keulen, Elena Mocanu
NeurIPS 2024 Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding Zhenyu Zhang, Runjin Chen, Shiwei Liu, Zhewei Yao, Olatunji Ruwase, Beidi Chen, Xiaoxia Wu, Zhangyang Wang
CPAL 2024 HRBP: Hardware-Friendly Regrouping Towards Block-Based Pruning for Sparse CNN Training Haoyu Ma, Chengming Zhang, Lizhi Xiang, Xiaolong Ma, Geng Yuan, Wenkai Zhang, Shiwei Liu, Tianlong Chen, Dingwen Tao, Yanzhi Wang, Zhangyang Wang, Xiaohui Xie
ICML 2024 Junk DNA Hypothesis: Pruning Small Pre-Trained Weights Irreversibly and Monotonically Impairs “Difficult” Downstream Tasks in LLMs Lu Yin, Ajay Kumar Jaiswal, Shiwei Liu, Souvik Kundu, Zhangyang Wang
ICLR 2024 NeurRev: Train Better Sparse Neural Network Practically via Neuron Revitalization Gen Li, Lu Yin, Jie Ji, Wei Niu, Minghai Qin, Bin Ren, Linke Guo, Shiwei Liu, Xiaolong Ma
ICML 2024 Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity Lu Yin, You Wu, Zhenyu Zhang, Cheng-Yu Hsieh, Yaqing Wang, Yiling Jia, Gen Li, Ajay Kumar Jaiswal, Mykola Pechenizkiy, Yi Liang, Michael Bendersky, Zhangyang Wang, Shiwei Liu
ICLRW 2024 Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity Lu Yin, You Wu, Zhenyu Zhang, Cheng-Yu Hsieh, Yaqing Wang, Yiling Jia, Gen Li, Ajay Kumar Jaiswal, Mykola Pechenizkiy, Yi Liang, Michael Bendersky, Zhangyang Wang, Shiwei Liu
ICML 2024 Sparse Cocktail: Every Sparse Pattern Every Sparse Ratio All at Once Zhangheng Li, Shiwei Liu, Tianlong Chen, Ajay Kumar Jaiswal, Zhenyu Zhang, Dilin Wang, Raghuraman Krishnamoorthi, Shiyu Chang, Zhangyang Wang
ICML 2023 Are Large Kernels Better Teachers than Transformers for ConvNets? Tianjin Huang, Lu Yin, Zhenyu Zhang, Li Shen, Meng Fang, Mykola Pechenizkiy, Zhangyang Wang, Shiwei Liu
ICCV 2023 Data Augmented Flatness-Aware Gradient Projection for Continual Learning Enneng Yang, Li Shen, Zhenyi Wang, Shiwei Liu, Guibing Guo, Xingwei Wang
NeurIPS 2023 Don’t Just Prune by Magnitude! Your Mask Topology Is a Secret Weapon Duc Hoang, Souvik Kundu, Shiwei Liu, Zhangyang Wang
NeurIPS 2023 Dynamic Sparsity Is Channel-Level Sparsity Learner Lu Yin, Gen Li, Meng Fang, Li Shen, Tianjin Huang, Zhangyang Wang, Vlado Menkovski, Xiaolong Ma, Mykola Pechenizkiy, Shiwei Liu
ECML-PKDD 2023 Enhancing Adversarial Training via Reweighting Optimization Trajectory Tianjin Huang, Shiwei Liu, Tianlong Chen, Meng Fang, Li Shen, Vlado Menkovski, Lu Yin, Yulong Pei, Mykola Pechenizkiy
ICML 2023 Graph Ladling: Shockingly Simple Parallel GNN Training Without Intermediate Communication Ajay Kumar Jaiswal, Shiwei Liu, Tianlong Chen, Ying Ding, Zhangyang Wang
ICML 2023 Instant Soup: Cheap Pruning Ensembles in a Single Pass Can Draw Lottery Tickets from Large Models Ajay Kumar Jaiswal, Shiwei Liu, Tianlong Chen, Ying Ding, Zhangyang Wang
AAAI 2023 Lottery Pools: Winning More by Interpolating Tickets Without Increasing Training or Inference Cost Lu Yin, Shiwei Liu, Meng Fang, Tianjin Huang, Vlado Menkovski, Mykola Pechenizkiy
CVPRW 2023 Many-Task Federated Learning: A New Problem Setting and a Simple Baseline Ruisi Cai, Xiaohan Chen, Shiwei Liu, Jayanth Srinivasa, Myungjin Lee, Ramana Kompella, Zhangyang Wang
ICLR 2023 More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 Using Sparsity Shiwei Liu, Tianlong Chen, Xiaohan Chen, Xuxi Chen, Qiao Xiao, Boqian Wu, Tommi Kärkkäinen, Mykola Pechenizkiy, Decebal Constantin Mocanu, Zhangyang Wang
NeurIPS 2023 Predicting Mutational Effects on Protein-Protein Binding via a Side-Chain Diffusion Probabilistic Model Shiwei Liu, Tian Zhu, Milong Ren, Chungong Yu, Dongbo Bu, Haicang Zhang
ECML-PKDD 2023 REST: Enhancing Group Robustness in DNNs Through Reweighted Sparse Training Jiaxu Zhao, Lu Yin, Shiwei Liu, Meng Fang, Mykola Pechenizkiy
ICLR 2023 Revisiting Pruning at Initialization Through the Lens of Ramanujan Graph Duc N.M. Hoang, Shiwei Liu, Radu Marculescu, Zhangyang Wang
ICLR 2023 Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers Tianlong Chen, Zhenyu Zhang, Ajay Kumar Jaiswal, Shiwei Liu, Zhangyang Wang
ICLR 2023 Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together! Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Kumar Jaiswal, Zhangyang Wang
TMLR 2023 Supervised Feature Selection with Neuron Evolution in Sparse Neural Networks Zahra Atashgahi, Xuhao Zhang, Neil Kichler, Shiwei Liu, Lu Yin, Mykola Pechenizkiy, Raymond Veldhuis, Decebal Constantin Mocanu
NeurIPS 2023 The Emergence of Essential Sparsity in Large Pre-Trained Models: The Weights That Matter Ajay Jaiswal, Shiwei Liu, Tianlong Chen, Zhangyang Wang
NeurIPS 2023 Towards Data-Agnostic Pruning at Initialization: What Makes a Good Sparse Mask? Hoang Pham, The Anh Ta, Shiwei Liu, Lichuan Xiang, Dung Le, Hongkai Wen, Long Tran-Thanh
MLJ 2022 A Brain-Inspired Algorithm for Training Highly Sparse Neural Networks Zahra Atashgahi, Joost Pieterse, Shiwei Liu, Decebal Constantin Mocanu, Raymond N. J. Veldhuis, Mykola Pechenizkiy
ICLR 2022 Deep Ensembling with No Overhead for Either Training or Testing: The All-Round Blessings of Dynamic Sparsity Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu
NeurIPS 2022 Dynamic Sparse Network for Time Series Classification: Learning What to “See” Qiao Xiao, Boqian Wu, Yu Zhang, Shiwei Liu, Mykola Pechenizkiy, Elena Mocanu, Decebal Constantin Mocanu
ICLR 2022 The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training Shiwei Liu, Tianlong Chen, Xiaohan Chen, Li Shen, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy
LoG 2022 You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu
ICML 2021 Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training Shiwei Liu, Lu Yin, Decebal Constantin Mocanu, Mykola Pechenizkiy
ICML 2021 Selfish Sparse RNN Training Shiwei Liu, Decebal Constantin Mocanu, Yulong Pei, Mykola Pechenizkiy
NeurIPS 2021 Sparse Training via Boosting Pruning Plasticity with Neuroregeneration Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu
IJCAI 2020 Learning Sparse Neural Networks for Better Generalization Shiwei Liu
ECML-PKDD 2020 Topological Insights into Sparse Neural Networks Shiwei Liu, Tim van der Lee, Anil Yaman, Zahra Atashgahi, Davide Ferraro, Ghada Sokar, Mykola Pechenizkiy, Decebal Constantin Mocanu