Gorbunov, Eduard

41 publications

ICML 2025 Clipping Improves Adam-Norm and AdaGrad-Norm When the Noise Is Heavy-Tailed Savelii Chezhegov, Yaroslav Klyukin, Andrei Semenov, Aleksandr Beznosikov, Alexander Gasnikov, Samuel Horváth, Martin Takáč, Eduard Gorbunov
JMLR 2025 EF21 with Bells & Whistles: Six Algorithmic Extensions of Modern Error Feedback Ilyas Fatkhullin, Igor Sokolov, Eduard Gorbunov, Zhize Li, Peter Richtárik
NeurIPS 2025 Error Feedback Under $(L_0,L_1)$-Smoothness: Normalization and Momentum Sarit Khirirat, Abdurakhmon Sadiev, Artem Riabinin, Eduard Gorbunov, Peter Richtárik
ICLRW 2025 Initialization Using Update Approximation Is a Silver Bullet for Extremely Efficient Low-Rank Fine-Tuning Kaustubh Ponkshe, Raghav Singhal, Eduard Gorbunov, Alexey Tumanov, Samuel Horváth, Praneeth Vepakomma
ICLR 2025 Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity Eduard Gorbunov, Nazarii Tupitsa, Sayantan Choudhury, Alen Aliev, Peter Richtárik, Samuel Horváth, Martin Takáč
ICLR 2025 Methods with Local Steps and Random Reshuffling for Generally Smooth Non-Convex Federated Optimization Yury Demidovich, Petr Ostroukhov, Grigory Malinovsky, Samuel Horváth, Martin Takáč, Peter Richtárik, Eduard Gorbunov
TMLR 2025 Partially Personalized Federated Learning: Breaking the Curse of Data Heterogeneity Konstantin Mishchenko, Rustem Islamov, Eduard Gorbunov, Samuel Horváth
AISTATS 2024 Breaking the Heavy-Tailed Noise Barrier in Stochastic Optimization Problems Nikita Puchkin, Eduard Gorbunov, Nickolay Kutuzov, Alexander Gasnikov
ICLRW 2024 Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just CLIP Gradient Differences Grigory Malinovsky, Eduard Gorbunov, Samuel Horváth, Peter Richtárik
NeurIPS 2024 Byzantine Robustness and Partial Participation Can Be Achieved at Once: Just CLIP Gradient Differences Grigory Malinovsky, Peter Richtárik, Samuel Horváth, Eduard Gorbunov
AISTATS 2024 Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard Gorbunov, Peter Richtárik
NeurIPSW 2024 Communication-Efficient Algorithms Under Generalized Smoothness Assumptions Sarit Khirirat, Abdurakhmon Sadiev, Artem Riabinin, Eduard Gorbunov, Peter Richtárik
NeurIPS 2024 Don't Compress Gradients in Random Reshuffling: Compress Gradient Differences Abdurakhmon Sadiev, Grigory Malinovsky, Eduard Gorbunov, Igor Sokolov, Ahmed Khaled, Konstantin Burlachenko, Peter Richtárik
NeurIPS 2024 Exploring Jacobian Inexactness in Second-Order Methods for Variational Inequalities: Lower Bounds, Optimal Algorithms and Quasi-Newton Approximations Artem Agafonov, Petr Ostroukhov, Roman Mozhaev, Konstantin Yakovlev, Eduard Gorbunov, Martin Takáč, Alexander Gasnikov, Dmitry Kamzolov
ICML 2024 High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise Eduard Gorbunov, Abdurakhmon Sadiev, Marina Danilova, Samuel Horváth, Gauthier Gidel, Pavel Dvurechensky, Alexander Gasnikov, Peter Richtárik
NeurIPS 2024 Remove That Square Root: A New Efficient Scale-Invariant Version of AdaGrad Sayantan Choudhury, Nazarii Tupitsa, Nicolas Loizou, Samuel Horváth, Martin Takáč, Eduard Gorbunov
NeurIPS 2023 Accelerated Zeroth-Order Method for Non-Smooth Stochastic Convex Optimization Problem with Infinite Variance Nikita Kornilov, Ohad Shamir, Aleksandr Lobanov, Darina Dvinskikh, Alexander Gasnikov, Innokentiy Shibaev, Eduard Gorbunov, Samuel Horváth
NeurIPS 2023 Byzantine-Tolerant Methods for Distributed Variational Inequalities Nazarii Tupitsa, Abdulla Jasem Almansoori, Yanlin Wu, Martin Takáč, Karthik Nandakumar, Samuel Horváth, Eduard Gorbunov
ICML 2023 Convergence of Proximal Point and Extragradient-Based Methods Beyond Monotonicity: The Case of Negative Comonotonicity Eduard Gorbunov, Adrien Taylor, Samuel Horváth, Gauthier Gidel
ICMLW 2023 Federated Optimization Algorithms with Random Reshuffling and Gradient Compression Abdurakhmon Sadiev, Grigory Malinovsky, Eduard Gorbunov, Igor Sokolov, Ahmed Khaled, Konstantin Pavlovich Burlachenko, Peter Richtárik
ICML 2023 High-Probability Bounds for Stochastic Optimization and Variational Inequalities: The Case of Unbounded Variance Abdurakhmon Sadiev, Marina Danilova, Eduard Gorbunov, Samuel Horváth, Gauthier Gidel, Pavel Dvurechensky, Alexander Gasnikov, Peter Richtárik
NeurIPS 2023 Single-Call Stochastic Extragradient Methods for Structured Non-Monotone Variational Inequalities: Improved Analysis Under Weaker Conditions Sayantan Choudhury, Eduard Gorbunov, Nicolas Loizou
AISTATS 2023 Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods Aleksandr Beznosikov, Eduard Gorbunov, Hugo Berard, Nicolas Loizou
ICLR 2023 Variance Reduction Is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top Eduard Gorbunov, Samuel Horváth, Peter Richtárik, Gauthier Gidel
AISTATS 2022 Extragradient Method: O(1/K) Last-Iterate Convergence for Monotone Variational Inequalities and Connections with Cocoercivity Eduard Gorbunov, Nicolas Loizou, Gauthier Gidel
AISTATS 2022 Stochastic Extragradient: General Analysis and Improved Rates Eduard Gorbunov, Hugo Berard, Gauthier Gidel, Nicolas Loizou
ICML 2022 3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation Peter Richtárik, Igor Sokolov, Elnur Gasanov, Ilyas Fatkhullin, Zhize Li, Eduard Gorbunov
NeurIPS 2022 Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise Eduard Gorbunov, Marina Danilova, David Dobre, Pavel Dvurechensky, Alexander Gasnikov, Gauthier Gidel
NeurIPS 2022 Last-Iterate Convergence of Optimistic Gradient Method for Monotone Variational Inequalities Eduard Gorbunov, Adrien Taylor, Gauthier Gidel
ICML 2022 Secure Distributed Training at Scale Eduard Gorbunov, Alexander Borzunov, Michael Diskin, Max Ryabinin
NeurIPSW 2022 Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods Aleksandr Beznosikov, Eduard Gorbunov, Hugo Berard, Nicolas Loizou
AISTATS 2021 Local SGD: Unified Theory and New Efficient Methods Eduard Gorbunov, Filip Hanzely, Peter Richtárik
ICML 2021 MARINA: Faster Non-Convex Distributed Learning with Compression Eduard Gorbunov, Konstantin P. Burlachenko, Zhize Li, Peter Richtárik
NeurIPS 2021 Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices Max Ryabinin, Eduard Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko
ICLR 2020 A Stochastic Derivative Free Optimization Method with Momentum Eduard Gorbunov, Adel Bibi, Ozan Sener, El Houcine Bergou, Peter Richtárik
AISTATS 2020 A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent Eduard Gorbunov, Filip Hanzely, Peter Richtárik
NeurIPS 2020 Linearly Converging Error Compensated SGD Eduard Gorbunov, Dmitry Kovalev, Dmitry Makarenko, Peter Richtárik
NeurIPS 2020 Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping Eduard Gorbunov, Marina Danilova, Alexander Gasnikov
COLT 2019 Near Optimal Methods for Minimizing Convex Functions with Lipschitz $p$-th Derivatives Alexander Gasnikov, Pavel Dvurechensky, Eduard Gorbunov, Evgeniya Vorontsova, Daniil Selikhanovych, César A. Uribe, Bo Jiang, Haoyue Wang, Shuzhong Zhang, Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, Aaron Sidford
COLT 2019 Optimal Tensor Methods in Smooth Convex and Uniformly Convex Optimization Alexander Gasnikov, Pavel Dvurechensky, Eduard Gorbunov, Evgeniya Vorontsova, Daniil Selikhanovych, César A. Uribe
NeurIPS 2018 Stochastic Spectral and Conjugate Descent Methods Dmitry Kovalev, Peter Richtárik, Eduard Gorbunov, Elnur Gasanov