Zhang, Xinwei
20 publications
NeurIPSW
2024
Addax: Utilizing Zeroth-Order Gradients to Improve Memory Efficiency and Performance of SGD for Fine-Tuning Language Models
NeurIPS
2024
DOPPLER: Differentially Private Optimizers with Low-Pass Filter for Privacy Noise Reduction
NeurIPSW
2024
DiSK: Differentially Private Optimizer with Simplified Kalman Filter for Noise Reduction
TMLR
2024
Hybrid Federated Learning for Feature & Sample Heterogeneity: Algorithms and Implementation
ICML
2023
FedAvg Converges to Zero Training Loss Linearly for Overparameterized Multi-Layer Neural Networks
ICML
2022
A Stochastic Multi-Rate Control Framework for Modeling Distributed Optimization Algorithms