Malinovsky, Grigory

17 publications

UAI 2025. An Optimal Algorithm for Strongly Convex Min-Min Optimization. Dmitry Kovalev, Alexander Gasnikov, Grigory Malinovsky.
ICLR 2025. MAST: Model-Agnostic Sparsified Training. Yury Demidovich, Grigory Malinovsky, Egor Shulgin, Peter Richtárik.
ICLR 2025. Methods with Local Steps and Random Reshuffling for Generally Smooth Non-Convex Federated Optimization. Yury Demidovich, Petr Ostroukhov, Grigory Malinovsky, Samuel Horváth, Martin Takáč, Peter Richtárik, Eduard Gorbunov.
ICLRW 2024. Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just CLIP Gradient Differences. Grigory Malinovsky, Eduard Gorbunov, Samuel Horváth, Peter Richtárik.
NeurIPS 2024. Byzantine Robustness and Partial Participation Can Be Achieved at Once: Just CLIP Gradient Differences. Grigory Malinovsky, Peter Richtárik, Samuel Horváth, Eduard Gorbunov.
NeurIPS 2024. Don't Compress Gradients in Random Reshuffling: Compress Gradient Differences. Abdurakhmon Sadiev, Grigory Malinovsky, Eduard Gorbunov, Igor Sokolov, Ahmed Khaled, Konstantin Burlachenko, Peter Richtárik.
NeurIPS 2024. MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence. Ionut-Vlad Modoranu, Mher Safaryan, Grigory Malinovsky, Eldar Kurtic, Thomas Robert, Peter Richtárik, Dan Alistarh.
AAAI 2024. Minibatch Stochastic Three Points Method for Unconstrained Smooth Minimization. Soumia Boucherouite, Grigory Malinovsky, Peter Richtárik, El Houcine Bergou.
NeurIPS 2023. A Guide Through the Zoo of Biased SGD. Yury Demidovich, Grigory Malinovsky, Igor Sokolov, Peter Richtárik.
AISTATS 2023. Can 5th Generation Local Training Methods Support Client Sampling? Yes! Michał Grudzień, Grigory Malinovsky, Peter Richtárik.
ICMLW 2023. Federated Learning with Regularized Client Participation. Grigory Malinovsky, Samuel Horváth, Konstantin Burlachenko, Peter Richtárik.
ICMLW 2023. Federated Optimization Algorithms with Random Reshuffling and Gradient Compression. Abdurakhmon Sadiev, Grigory Malinovsky, Eduard Gorbunov, Igor Sokolov, Ahmed Khaled, Konstantin Burlachenko, Peter Richtárik.
ICMLW 2023. Improving Accelerated Federated Learning with Compression and Importance Sampling. Michał Grudzień, Grigory Malinovsky, Peter Richtárik.
UAI 2023. Random Reshuffling with Variance Reduction: New Analysis and Better Rates. Grigory Malinovsky, Alibek Sailanbayev, Peter Richtárik.
NeurIPSW 2023. TAMUNA: Doubly Accelerated Federated Learning with Local Training, Compression, and Partial Participation. Laurent Condat, Ivan Agarský, Grigory Malinovsky, Peter Richtárik.
ICML 2022. ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally! Konstantin Mishchenko, Grigory Malinovsky, Sebastian Stich, Peter Richtárik.
NeurIPS 2022. Variance Reduced ProxSkip: Algorithm, Theory and Application to Federated Learning. Grigory Malinovsky, Kai Yi, Peter Richtárik.