Charles, Zachary

21 publications

NeurIPS 2025. Communication-Efficient Language Model Training Scales Reliably and Robustly: Scaling Laws for DiLoCo. Zachary Charles, Gabriel Teston, Lucio M. Dery, J Keith Rush, Nova Fallen, Zachary Garrett, Arthur Szlam, Arthur Douillard.
ICML 2025. Scaling Laws for Differentially Private Language Models. Ryan McKenna, Yangsibo Huang, Amer Sinha, Borja Balle, Zachary Charles, Christopher A. Choquette-Choo, Badih Ghazi, Georgios Kaissis, Ravi Kumar, Ruibo Liu, Da Yu, Chiyuan Zhang.
ICMLW 2024. DrJAX: Scalable and Differentiable MapReduce Primitives in JAX. J Keith Rush, Zachary Charles, Zachary Garrett, Sean Augenstein, Nicole Elyse Mitchell.
JMLR 2024. Federated Automatic Differentiation. Keith Rush, Zachary Charles, Zachary Garrett.
ICMLW 2024. Fine-Tuning Large Language Models with User-Level Differential Privacy. Zachary Charles, Arun Ganesh, Ryan McKenna, Hugh Brendan McMahan, Nicole Elyse Mitchell, Krishna Pillutla, J Keith Rush.
TMLR 2024. Leveraging Function Space Aggregation for Federated Learning at Scale. Nikita Dhawan, Nicole Elyse Mitchell, Zachary Charles, Zachary Garrett, Gintare Karolina Dziugaite.
NeurIPS 2023. Gradient Descent with Linearly Correlated Noise: Theory and Applications to Differential Privacy. Anastasiia Koloskova, Ryan McKenna, Zachary Charles, John Rush, H. Brendan McMahan.
NeurIPS 2023. Towards Federated Foundation Models: Scalable Dataset Pipelines for Group-Structured Learning. Zachary Charles, Nicole Mitchell, Krishna Pillutla, Michael Reneer, Zachary Garrett.
CVPRW 2022. Does Federated Dropout Actually Work? Gary Cheng, Zachary Charles, Zachary Garrett, Keith Rush.
ALT 2022. Iterated Vector Fields and Conservatism, with Applications to Federated Learning. Zachary Charles, Keith Rush.
NeurIPSW 2022. Motley: Benchmarking Heterogeneity and Personalization in Federated Learning. Shanshan Wu, Tian Li, Zachary Charles, Yu Xiao, Ken Liu, Zheng Xu, Virginia Smith.
AISTATS 2021. Convergence and Accuracy Trade-Offs in Federated Learning and Meta-Learning. Zachary Charles, Jakub Konečný.
ICLR 2021. Adaptive Federated Optimization. Sashank J. Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, Hugh Brendan McMahan.
FnTML 2021. Advances and Open Problems in Federated Learning. Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista A. Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaïd Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Hang Qi, Daniel Ramage, Ramesh Raskar, Mariana Raykova, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao.
NeurIPS 2021. On Large-Cohort Training for Federated Learning. Zachary Charles, Zachary Garrett, Zhouyuan Huo, Sergei Shmulyian, Virginia Smith.
AISTATS 2019. A Geometric Perspective on the Transferability of Adversarial Directions. Zachary Charles, Harrison Rosenberg, Dimitris Papailiopoulos.
NeurIPS 2019. DETOX: A Redundancy-Based Framework for Faster and More Robust Gradient Aggregation. Shashank Rajput, Hongyi Wang, Zachary Charles, Dimitris Papailiopoulos.
ICML 2019. Does Data Augmentation Lead to Positive Margin? Shashank Rajput, Zhili Feng, Zachary Charles, Po-Ling Loh, Dimitris Papailiopoulos.
NeurIPS 2018. ATOMO: Communication-Efficient Learning via Atomic Sparsification. Hongyi Wang, Scott Sievert, Shengchao Liu, Zachary Charles, Dimitris Papailiopoulos, Stephen Wright.
ICML 2018. DRACO: Byzantine-Resilient Distributed Training via Redundant Gradients. Lingjiao Chen, Hongyi Wang, Zachary Charles, Dimitris Papailiopoulos.
ICML 2018. Stability and Generalization of Learning Algorithms That Converge to Global Optima. Zachary Charles, Dimitris Papailiopoulos.