Condat, Laurent

11 publications

TMLR 2025. Explicit Personalization and Local Training: Double Communication Acceleration in Federated Learning. Kai Yi, Laurent Condat, Peter Richtárik.
TMLR 2025. FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models. Kai Yi, Georg Meinhardt, Laurent Condat, Peter Richtárik.
ICLR 2025. LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression. Laurent Condat, Arto Maranjyan, Peter Richtárik.
NeurIPSW 2024. LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression. Laurent Condat, Arto Maranjyan, Peter Richtárik.
NeurIPSW 2024. Stochastic Proximal Point Methods for Monotone Inclusions Under Expected Similarity. Abdurakhmon Sadiev, Laurent Condat, Peter Richtárik.
ICLR 2023. RandProx: Primal-Dual Optimization Algorithms with Randomized Proximal Updates. Laurent Condat, Peter Richtárik.
NeurIPSW 2023. TAMUNA: Doubly Accelerated Federated Learning with Local Training, Compression, and Partial Participation. Laurent Condat, Ivan Agarský, Grigory Malinovsky, Peter Richtárik.
AISTATS 2022. An Optimal Algorithm for Strongly Convex Minimization Under Affine Constraints. Adil Salim, Laurent Condat, Dmitry Kovalev, Peter Richtárik.
NeurIPS 2022. EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization. Laurent Condat, Kai Yi, Peter Richtárik.
NeurIPSW 2022. RandProx: Primal-Dual Optimization Algorithms with Randomized Proximal Updates. Laurent Condat, Peter Richtárik.
ICML 2020. From Local SGD to Local Fixed-Point Methods for Federated Learning. Grigory Malinovskiy, Dmitry Kovalev, Elnur Gasanov, Laurent Condat, Peter Richtárik.