Li, Zhize

23 publications

NeurIPS 2025: Coresets for Clustering Under Stochastic Noise. Lingxiao Huang, Zhize Li, Nisheeth K. Vishnoi, Runkai Yang, Haoyu Zhao.

JMLR 2025: EF21 with Bells & Whistles: Six Algorithmic Extensions of Modern Error Feedback. Ilyas Fatkhullin, Igor Sokolov, Eduard Gorbunov, Zhize Li, Peter Richtárik.

AAAI 2025: EFSkip: A New Error Feedback with Linear Speedup for Compressed Federated Learning with Arbitrary Data Heterogeneity. Hongyan Bao, Pengwen Chen, Ying Sun, Zhize Li.

IJCAI 2025: SIFAR: A Simple Faster Accelerated Variance-Reduced Gradient Method. Zhize Li.

AISTATS 2024: Escaping Saddle Points in Heterogeneous Federated Learning via Distributed SGD with Communication Compression. Sijin Chen, Zhize Li, Yuejie Chi.

ICML 2022: 3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation. Peter Richtárik, Igor Sokolov, Elnur Gasanov, Ilyas Fatkhullin, Zhize Li, Eduard Gorbunov.

NeurIPS 2022: BEER: Fast $O(1/T)$ Rate for Decentralized Nonconvex Optimization with Communication Compression. Haoyu Zhao, Boyue Li, Zhize Li, Peter Richtárik, Yuejie Chi.

NeurIPS 2022: Coresets for Vertical Federated Learning: Regularized Linear Regression and $k$-Means Clustering. Lingxiao Huang, Zhize Li, Jialin Sun, Haoyu Zhao.

JMLR 2022: Simple and Optimal Stochastic Gradient Methods for Nonsmooth Nonconvex Optimization. Zhize Li, Jian Li.

NeurIPS 2022: SoteriaFL: A Unified Framework for Private Federated Learning with Communication Compression. Zhize Li, Haoyu Zhao, Boyue Li, Yuejie Chi.

NeurIPS 2021: CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression. Zhize Li, Peter Richtárik.

ICML 2021: MARINA: Faster Non-Convex Distributed Learning with Compression. Eduard Gorbunov, Konstantin P. Burlachenko, Zhize Li, Peter Richtárik.

ICML 2021: PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization. Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik.

AISTATS 2020: A Fast Anderson-Chebyshev Acceleration for Nonlinear Optimization. Zhize Li, Jian Li.

ICML 2020: Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization. Zhize Li, Dmitry Kovalev, Xun Qian, Peter Richtárik.

NeurIPS 2019: A Unified Variance-Reduced Accelerated Gradient Method for Convex Optimization. Guanghui Lan, Zhize Li, Yi Zhou.

IJCAI 2019: Gradient Boosting with Piece-Wise Linear Regression Trees. Yu Shi, Jian Li, Zhize Li.

ICLR 2019: Learning Two-Layer Neural Networks with Symmetric Inputs. Rong Ge, Rohith Kuditipudi, Zhize Li, Xiang Wang.

NeurIPS 2019: SSRGD: Simple Stochastic Recursive Gradient Descent for Escaping Saddle Points. Zhize Li.

COLT 2019: Stabilized SVRG: Simple Variance Reduction for Nonconvex Optimization. Rong Ge, Zhize Li, Weiyao Wang, Xiang Wang.

MLJ 2019: Stochastic Gradient Hamiltonian Monte Carlo with Variance Reduction for Bayesian Inference. Zhize Li, Tianyi Zhang, Shuyu Cheng, Jun Zhu, Jian Li.

NeurIPS 2018: A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization. Zhize Li, Jian Li.

NeurIPS 2015: On Top-K Selection in Multi-Armed Bandits and Hidden Bipartite Graphs. Wei Cao, Jian Li, Yufei Tao, Zhize Li.