Chen, Xiaohan

29 publications

ICML 2025. Expressive Power of Graph Neural Networks for (Mixed-Integer) Quadratic Programs. Ziang Chen, Xiaohan Chen, Jialin Liu, Xinshang Wang, Wotao Yin.
TMLR 2024. DIG-MILP: A Deep Instance Generator for Mixed-Integer Linear Programming with Feasibility Guarantee. Haoyu Peter Wang, Jialin Liu, Xiaohan Chen, Xinshang Wang, Pan Li, Wotao Yin.
NeurIPS 2024. Rethinking the Capacity of Graph Neural Networks for Branching Strategy. Ziang Chen, Jialin Liu, Xiaohan Chen, Xinshang Wang, Wotao Yin.
TMLR 2023. Chasing Better Deep Image Priors Between Over- and Under-Parameterization. Qiming Wu, Xiaohan Chen, Yifan Jiang, Zhangyang Wang.
CVPRW 2023. Many-Task Federated Learning: A New Problem Setting and a Simple Baseline. Ruisi Cai, Xiaohan Chen, Shiwei Liu, Jayanth Srinivasa, Myungjin Lee, Ramana Kompella, Zhangyang Wang.
ICLR 2023. More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 Using Sparsity. Shiwei Liu, Tianlong Chen, Xiaohan Chen, Xuxi Chen, Qiao Xiao, Boqian Wu, Tommi Kärkkäinen, Mykola Pechenizkiy, Decebal Constantin Mocanu, Zhangyang Wang.
AAAI 2023. Safeguarded Learned Convex Optimization. Howard Heaton, Xiaohan Chen, Zhangyang Wang, Wotao Yin.
ICML 2023. Towards Constituting Mathematical Structures for Learning to Optimize. Jialin Liu, Xiaohan Chen, Zhangyang Wang, Wotao Yin, Hanqin Cai.
ICLR 2022. Deep Ensembling with No Overhead for Either Training or Testing: The All-Round Blessings of Dynamic Sparsity. Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu.
AAAI 2022. Federated Dynamic Sparse Training: Computing Less, Communicating Less, yet Learning Better. Sameer Bibikar, Haris Vikalo, Zhangyang Wang, Xiaohan Chen.
JMLR 2022. Learning to Optimize: A Primer and a Benchmark. Tianlong Chen, Xiaohan Chen, Wuyang Chen, Howard Heaton, Jialin Liu, Zhangyang Wang, Wotao Yin.
ICLR 2022. Peek-a-Boo: What (More) Is Disguised in a Randomly Weighted Neural Network, and How to Find It Efficiently. Xiaohan Chen, Jason Zhang, Zhangyang Wang.
NeurIPS 2022. Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection Without Clean Datasets. Ruisi Cai, Zhenyu Zhang, Tianlong Chen, Xiaohan Chen, Zhangyang Wang.
ICLR 2022. The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training. Shiwei Liu, Tianlong Chen, Xiaohan Chen, Li Shen, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy.
ICLR 2021. A Design Space Study for LISTA and Beyond. Tianjian Meng, Xiaohan Chen, Yifan Jiang, Zhangyang Wang.
NeurIPS 2021. Hyperparameter Tuning Is All You Need for LISTA. Xiaohan Chen, Jialin Liu, Zhangyang Wang, Wotao Yin.
ICLR 2021. Learning a Minimax Optimizer: A Pilot Study. Jiayi Shen, Xiaohan Chen, Howard Heaton, Tianlong Chen, Jialin Liu, Wotao Yin, Zhangyang Wang.
NeurIPS 2021. Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot? Xiaolong Ma, Geng Yuan, Xuan Shen, Tianlong Chen, Xuxi Chen, Xiaohan Chen, Ning Liu, Minghai Qin, Sijia Liu, Zhangyang Wang, Yanzhi Wang.
NeurIPS 2021. Sparse Training via Boosting Pruning Plasticity with Neuroregeneration. Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu.
NeurIPS 2021. The Elastic Lottery Ticket Hypothesis. Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Jingjing Liu, Zhangyang Wang.
ICLR 2020. Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks. Haoran You, Chaojian Li, Pengfei Xu, Yonggan Fu, Yue Wang, Xiaohan Chen, Richard G. Baraniuk, Zhangyang Wang, Yingyan Lin.
NeurIPS 2020. MATE: Plugging in Model Awareness to Task Embedding for Meta Learning. Xiaohan Chen, Zhangyang Wang, Siyu Tang, Krikamol Muandet.
NeurIPS 2020. ShiftAddNet: A Hardware-Inspired Deep Network. Haoran You, Xiaohan Chen, Yongan Zhang, Chaojian Li, Sicheng Li, Zihao Liu, Zhangyang Wang, Yingyan Lin.
AISTATS 2020. Uncertainty Quantification for Deep Context-Aware Mobile Activity Recognition and Unknown Context Discovery. Zepeng Huo, Arash PakBin, Xiaohan Chen, Nathan Hurley, Ye Yuan, Xiaoning Qian, Zhangyang Wang, Shuai Huang, Bobak Mortazavi.
ICLR 2019. ALISTA: Analytic Weights Are as Good as Learned Weights in LISTA. Jialin Liu, Xiaohan Chen, Zhangyang Wang, Wotao Yin.
NeurIPS 2019. E2-Train: Training State-of-the-Art CNNs with over 80% Energy Savings. Yue Wang, Ziyu Jiang, Xiaohan Chen, Pengfei Xu, Yang Zhao, Yingyan Lin, Zhangyang Wang.
ICML 2019. Plug-and-Play Methods Provably Converge with Properly Trained Denoisers. Ernest Ryu, Jialin Liu, Sicheng Wang, Xiaohan Chen, Zhangyang Wang, Wotao Yin.
NeurIPS 2018. Can We Gain More from Orthogonality Regularizations in Training Deep Networks? Nitin Bansal, Xiaohan Chen, Zhangyang Wang.
NeurIPS 2018. Theoretical Linear Convergence of Unfolded ISTA and Its Practical Weights and Thresholds. Xiaohan Chen, Jialin Liu, Zhangyang Wang, Wotao Yin.