Richtárik, Peter

161 publications

ICML 2025 ATA: Adaptive Task Allocation for Efficient Resource Management in Distributed Machine Learning Arto Maranjyan, El Mehdi Saad, Peter Richtárik, Francesco Orabona
ECML-PKDD 2025 Collaborative Value Function Estimation Under Model Mismatch: A Federated Temporal Difference Analysis Ali Beikmohammadi, Sarit Khirirat, Peter Richtárik, Sindri Magnússon
UAI 2025 Correlated Quantization for Faster Nonconvex Distributed Optimization Andrei Panferov, Yury Demidovich, Ahmad Rammal, Peter Richtárik
JMLR 2025 EF21 with Bells & Whistles: Six Algorithmic Extensions of Modern Error Feedback Ilyas Fatkhullin, Igor Sokolov, Eduard Gorbunov, Zhize Li, Peter Richtárik
UAI 2025 ELF: Federated Langevin Algorithms with Primal, Dual and Bidirectional Compression Avetik Karagulyan, Peter Richtárik
NeurIPS 2025 Error Feedback Under $(L_0,L_1)$-Smoothness: Normalization and Momentum Sarit Khirirat, Abdurakhmon Sadiev, Artem Riabinin, Eduard Gorbunov, Peter Richtárik
TMLR 2025 Explicit Personalization and Local Training: Double Communication Acceleration in Federated Learning Kai Yi, Laurent Condat, Peter Richtárik
TMLR 2025 FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models Kai Yi, Georg Meinhardt, Laurent Condat, Peter Richtárik
TMLR 2025 GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity Arto Maranjyan, Mher Safaryan, Peter Richtárik
ICLR 2025 LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression Laurent Condat, Arto Maranjyan, Peter Richtárik
NeurIPS 2025 Local Curvature Descent: Squeezing More Curvature Out of Standard and Polyak Gradient Descent Peter Richtárik, Simone Maria Giancola, Dymitr Lubczyk, Robin Yadav
ICLR 2025 MAST: Model-Agnostic Sparsified Training Yury Demidovich, Grigory Malinovsky, Egor Shulgin, Peter Richtárik
ICLR 2025 Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity Eduard Gorbunov, Nazarii Tupitsa, Sayantan Choudhury, Alen Aliev, Peter Richtárik, Samuel Horváth, Martin Takáč
ICLR 2025 Methods with Local Steps and Random Reshuffling for Generally Smooth Non-Convex Federated Optimization Yury Demidovich, Petr Ostroukhov, Grigory Malinovsky, Samuel Horváth, Martin Takáč, Peter Richtárik, Eduard Gorbunov
UAI 2025 MindFlayer SGD: Efficient Parallel SGD in the Presence of Heterogeneous and Random Worker Compute Times Arto Maranjyan, Omar Shaikh Omar, Peter Richtárik
ICML 2025 Ringmaster ASGD: The First Asynchronous SGD with Optimal Time Complexity Arto Maranjyan, Alexander Tyurin, Peter Richtárik
NeurIPS 2025 Second-Order Optimization Under Heavy-Tailed Noise: Hessian Clipping and Sample Complexity Limits Abdurakhmon Sadiev, Peter Richtárik, Ilyas Fatkhullin
ICLRW 2025 Symmetric Pruning for Large Language Models Kai Yi, Peter Richtárik
ICLRW 2024 Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just CLIP Gradient Differences Grigory Malinovsky, Eduard Gorbunov, Samuel Horváth, Peter Richtárik
NeurIPS 2024 Byzantine Robustness and Partial Participation Can Be Achieved at Once: Just CLIP Gradient Differences Grigory Malinovsky, Peter Richtárik, Samuel Horváth, Eduard Gorbunov
NeurIPSW 2024 Cohort Squeeze: Beyond a Single Communication Round per Cohort in Cross-Device Federated Learning Kai Yi, Timur Kharisov, Igor Sokolov, Peter Richtárik
AISTATS 2024 Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard Gorbunov, Peter Richtarik
NeurIPSW 2024 Communication-Efficient Algorithms Under Generalized Smoothness Assumptions Sarit Khirirat, Abdurakhmon Sadiev, Artem Riabinin, Eduard Gorbunov, Peter Richtárik
ICLR 2024 Det-CGD: Compressed Gradient Descent with Matrix Stepsizes for Non-Convex Optimization Hanmin Li, Avetik Karagulyan, Peter Richtárik
NeurIPSW 2024 Differentially Private Random Block Coordinate Descent Arto Maranjyan, Abdurakhmon Sadiev, Peter Richtárik
NeurIPS 2024 Don't Compress Gradients in Random Reshuffling: Compress Gradient Differences Abdurakhmon Sadiev, Grigory Malinovsky, Eduard Gorbunov, Igor Sokolov, Ahmed Khaled, Konstantin Burlachenko, Peter Richtárik
ICLR 2024 Error Feedback Reloaded: From Quadratic to Arithmetic Mean of Smoothness Constants Peter Richtárik, Elnur Gasanov, Konstantin Pavlovich Burlachenko
ICLR 2024 FedP3: Federated Personalized and Privacy-Friendly Network Pruning Under Model Heterogeneity Kai Yi, Nidham Gazagnadou, Peter Richtárik, Lingjuan Lyu
TMLR 2024 Federated Sampling with Langevin Algorithm Under Isoperimetry Lukang Sun, Adil Salim, Peter Richtárik
NeurIPS 2024 Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations Alexander Tyurin, Kaja Gruntkowska, Peter Richtárik
ICML 2024 High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise Eduard Gorbunov, Abdurakhmon Sadiev, Marina Danilova, Samuel Horváth, Gauthier Gidel, Pavel Dvurechensky, Alexander Gasnikov, Peter Richtárik
NeurIPS 2024 Improving the Worst-Case Bidirectional Communication Complexity for Nonconvex Distributed Optimization Under Function Similarity Kaja Gruntkowska, Alexander Tyurin, Peter Richtárik
NeurIPSW 2024 LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression Laurent Condat, Arto Maranjyan, Peter Richtárik
NeurIPSW 2024 Local Curvature Descent: Squeezing More Curvature Out of Standard and Polyak Gradient Descent Peter Richtárik, Simone Maria Giancola, Dymitr Lubczyk, Robin Yadav
NeurIPS 2024 MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence Ionut-Vlad Modoranu, Mher Safaryan, Grigory Malinovsky, Eldar Kurtic, Thomas Robert, Peter Richtárik, Dan Alistarh
NeurIPSW 2024 MindFlayer: Efficient Asynchronous Parallel SGD in the Presence of Heterogeneous and Random Worker Compute Times Arto Maranjyan, Omar Shaikh Omar, Peter Richtárik
AAAI 2024 Minibatch Stochastic Three Points Method for Unconstrained Smooth Minimization Soumia Boucherouite, Grigory Malinovsky, Peter Richtárik, El Houcine Bergou
NeurIPSW 2024 On the Convergence of DP-SGD with Adaptive Clipping Egor Shulgin, Peter Richtárik
NeurIPSW 2024 On the Convergence of FedProx with Extrapolation and Inexact Prox Hanmin Li, Peter Richtárik
NeurIPS 2024 On the Optimal Time Complexities in Decentralized Stochastic Asynchronous Optimization Alexander Tyurin, Peter Richtárik
NeurIPS 2024 PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression Vladimir Malinovskii, Denis Mazur, Ivan Ilin, Denis Kuznedelev, Konstantin Burlachenko, Kai Yi, Dan Alistarh, Peter Richtarik
NeurIPSW 2024 SPAM: Stochastic Proximal Point Method with Momentum Variance Reduction for Nonconvex Cross-Device Federated Learning Avetik Karagulyan, Egor Shulgin, Abdurakhmon Sadiev, Peter Richtárik
NeurIPS 2024 Shadowheart SGD: Distributed Asynchronous SGD with Optimal Time Complexity Under Arbitrary Computation and Communication Heterogeneity Alexander Tyurin, Marta Pozzi, Ivan Ilin, Peter Richtárik
NeurIPSW 2024 Stochastic Proximal Point Methods for Monotone Inclusions Under Expected Similarity Abdurakhmon Sadiev, Laurent Condat, Peter Richtárik
NeurIPS 2024 The Power of Extrapolation in Federated Learning Hanmin Li, Kirill Acharya, Peter Richtárik
ICML 2024 Towards a Better Theoretical Understanding of Independent Subnetwork Training Egor Shulgin, Peter Richtárik
AISTATS 2024 Understanding Progressive Training Through the Framework of Randomized Coordinate Descent Rafał Szlendak, Elnur Gasanov, Peter Richtarik
NeurIPS 2023 2Direction: Theoretically Faster Distributed Training with Bidirectional Communication Compression Alexander Tyurin, Peter Richtarik
NeurIPS 2023 A Computation and Communication Efficient Method for Distributed Nonconvex Problems in the Partial Participation Setting Alexander Tyurin, Peter Richtarik
NeurIPS 2023 A Guide Through the Zoo of Biased SGD Yury Demidovich, Grigory Malinovsky, Igor Sokolov, Peter Richtarik
TMLR 2023 AI-SARAH: Adaptive and Implicit Stochastic Recursive Gradient Methods Zheng Shi, Abdurakhmon Sadiev, Nicolas Loizou, Peter Richtárik, Martin Takáč
TMLR 2023 Adaptive Compression for Communication-Efficient Distributed Training Maksim Makarenko, Elnur Gasanov, Abdurakhmon Sadiev, Rustem Islamov, Peter Richtárik
TMLR 2023 Better Theory for SGD in the Nonconvex World Ahmed Khaled, Peter Richtárik
AISTATS 2023 Can 5th Generation Local Training Methods Support Client Sampling? Yes! Michał Grudzień, Grigory Malinovsky, Peter Richtarik
AISTATS 2023 Catalyst Acceleration of Error Compensated Methods Leads to Better Communication Complexity Xun Qian, Hanze Dong, Tong Zhang, Peter Richtarik
ICMLW 2023 Convergence of First-Order Algorithms for Meta-Learning with Moreau Envelopes Konstantin Mishchenko, Slavomir Hanzely, Peter Richtárik
AISTATS 2023 Convergence of Stein Variational Gradient Descent Under a Weaker Smoothness Condition Lukang Sun, Avetik Karagulyan, Peter Richtarik
ICLR 2023 DASHA: Distributed Nonconvex Optimization with Communication Compression and Optimal Oracle Complexity Alexander Tyurin, Peter Richtárik
NeurIPSW 2023 Det-CGD: Compressed Gradient Descent with Matrix Stepsizes for Non-Convex Optimization Hanmin Li, Avetik Karagulyan, Peter Richtárik
TMLR 2023 Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation Rustem Islamov, Xun Qian, Slavomir Hanzely, Mher Safaryan, Peter Richtárik
ICML 2023 EF21-P and Friends: Improved Theoretical Communication Complexity for Distributed Optimization with Bidirectional Compression Kaja Gruntkowska, Alexander Tyurin, Peter Richtárik
ICMLW 2023 ELF: Federated Langevin Algorithms with Primal, Dual and Bidirectional Compression Avetik Karagulyan, Peter Richtárik
ICMLW 2023 Federated Learning with Regularized Client Participation Grigory Malinovsky, Samuel Horváth, Konstantin Pavlovich Burlachenko, Peter Richtárik
ICMLW 2023 Federated Optimization Algorithms with Random Reshuffling and Gradient Compression Abdurakhmon Sadiev, Grigory Malinovsky, Eduard Gorbunov, Igor Sokolov, Ahmed Khaled, Konstantin Pavlovich Burlachenko, Peter Richtárik
ICML 2023 High-Probability Bounds for Stochastic Optimization and Variational Inequalities: The Case of Unbounded Variance Abdurakhmon Sadiev, Marina Danilova, Eduard Gorbunov, Samuel Horváth, Gauthier Gidel, Pavel Dvurechensky, Alexander Gasnikov, Peter Richtárik
NeurIPSW 2023 Improved Stein Variational Gradient Descent with Importance Weights Lukang Sun, Peter Richtárik
ICMLW 2023 Improving Accelerated Federated Learning with Compression and Importance Sampling Michał Grudzień, Grigory Malinovsky, Peter Richtárik
NeurIPSW 2023 MARINA Meets Matrix Stepsizes: Variance Reduced Distributed Non-Convex Optimization Hanmin Li, Avetik Karagulyan, Peter Richtárik
NeurIPS 2023 Momentum Provably Improves Error Feedback! Ilyas Fatkhullin, Alexander Tyurin, Peter Richtarik
ICMLW 2023 Momentum Provably Improves Error Feedback! Ilyas Fatkhullin, Alexander Tyurin, Peter Richtárik
JMLR 2023 On Biased Compression for Distributed Learning Aleksandr Beznosikov, Samuel Horváth, Peter Richtárik, Mher Safaryan
NeurIPS 2023 Optimal Time Complexities of Parallel Stochastic Optimization Methods Under a Fixed Computation Model Alexander Tyurin, Peter Richtarik
TMLR 2023 Personalized Federated Learning with Communication Compression El Houcine Bergou, Konstantin Pavlovich Burlachenko, Aritra Dutta, Peter Richtárik
ICLR 2023 RandProx: Primal-Dual Optimization Algorithms with Randomized Proximal Updates Laurent Condat, Peter Richtárik
UAI 2023 Random Reshuffling with Variance Reduction: New Analysis and Better Rates Grigory Malinovsky, Alibek Sailanbayev, Peter Richtárik
TMLR 2023 Sharper Rates and Flexible Framework for Nonconvex SGD with Client and Data Sampling Alexander Tyurin, Lukang Sun, Konstantin Pavlovich Burlachenko, Peter Richtárik
NeurIPSW 2023 TAMUNA: Doubly Accelerated Federated Learning with Local Training, Compression, and Partial Participation Laurent Condat, Ivan Agarský, Grigory Malinovsky, Peter Richtárik
ICMLW 2023 Towards a Better Theoretical Understanding of Independent Subnetwork Training Egor Shulgin, Peter Richtárik
NeurIPSW 2023 Towards a Better Theoretical Understanding of Independent Subnetwork Training Egor Shulgin, Peter Richtárik
ICLR 2023 Variance Reduction Is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top Eduard Gorbunov, Samuel Horváth, Peter Richtárik, Gauthier Gidel
AISTATS 2022 An Optimal Algorithm for Strongly Convex Minimization Under Affine Constraints Adil Salim, Laurent Condat, Dmitry Kovalev, Peter Richtarik
AISTATS 2022 Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning Xun Qian, Rustem Islamov, Mher Safaryan, Peter Richtarik
AISTATS 2022 FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning Elnur Gasanov, Ahmed Khaled, Samuel Horváth, Peter Richtarik
ICML 2022 3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation Peter Richtarik, Igor Sokolov, Elnur Gasanov, Ilyas Fatkhullin, Zhize Li, Eduard Gorbunov
ICML 2022 A Convergence Theory for SVGD in the Population Limit Under Talagrand’s Inequality T1 Adil Salim, Lukang Sun, Peter Richtarik
NeurIPS 2022 A Damped Newton Method Achieves Global $\mathcal O \left(\frac{1}{k^2}\right)$ and Local Quadratic Convergence Rate Slavomír Hanzely, Dmitry Kamzolov, Dmitry Pasechnyuk, Alexander Gasnikov, Peter Richtarik, Martin Takac
NeurIPS 2022 Accelerated Primal-Dual Gradient Method for Smooth and Convex-Concave Saddle-Point Problems with Bilinear Coupling Dmitry Kovalev, Alexander Gasnikov, Peter Richtarik
NeurIPS 2022 BEER: Fast $O(1/T)$ Rate for Decentralized Nonconvex Optimization with Communication Compression Haoyu Zhao, Boyue Li, Zhize Li, Peter Richtarik, Yuejie Chi
NeurIPSW 2022 Certified Robustness in Federated Learning Motasem Alfarra, Juan Camilo Perez, Egor Shulgin, Peter Richtárik, Bernard Ghanem
NeurIPS 2022 Communication Acceleration of Local Gradient Methods via an Accelerated Primal-Dual Algorithm with an Inexact Prox Abdurakhmon Sadiev, Dmitry Kovalev, Peter Richtarik
NeurIPS 2022 Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees Aleksandr Beznosikov, Peter Richtarik, Michael Diskin, Max Ryabinin, Alexander Gasnikov
ICLR 2022 Doubly Adaptive Scaled Algorithm for Machine Learning Using Second-Order Information Majid Jahani, Sergey Rusakov, Zheng Shi, Peter Richtárik, Michael W. Mahoney, Martin Takac
NeurIPS 2022 EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization Laurent Condat, Kai Yi, Peter Richtarik
ICML 2022 FedNL: Making Newton-Type Methods Applicable to Federated Learning Mher Safaryan, Rustem Islamov, Xun Qian, Peter Richtarik
TMLR 2022 FedShuffle: Recipes for Better Use of Local Work in Federated Learning Samuel Horváth, Maziar Sanjabi, Lin Xiao, Peter Richtárik, Michael Rabbat
ICLR 2022 IntSGD: Adaptive Floatless Compression of Stochastic Gradients Konstantin Mishchenko, Bokun Wang, Dmitry Kovalev, Peter Richtárik
NeurIPS 2022 Optimal Algorithms for Decentralized Stochastic Variational Inequalities Dmitry Kovalev, Aleksandr Beznosikov, Abdurakhmon Sadiev, Michael Persiianov, Peter Richtarik, Alexander Gasnikov
TMLR 2022 Optimal Client Sampling for Federated Learning Wenlin Chen, Samuel Horváth, Peter Richtárik
ICLR 2022 Permutation Compressors for Provably Faster Distributed Nonconvex Optimization Rafał Szlendak, Alexander Tyurin, Peter Richtárik
ICML 2022 ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally! Konstantin Mishchenko, Grigory Malinovsky, Sebastian Stich, Peter Richtarik
ICML 2022 Proximal and Federated Random Reshuffling Konstantin Mishchenko, Ahmed Khaled, Peter Richtarik
NeurIPSW 2022 RandProx: Primal-Dual Optimization Algorithms with Randomized Proximal Updates Laurent Condat, Peter Richtárik
UAI 2022 Shifted Compression Framework: Generalizations and Improvements Egor Shulgin, Peter Richtárik
NeurIPS 2022 Theoretically Better and Numerically Faster Distributed Optimization with Smoothness-Aware Quantization Techniques Bokun Wang, Mher Safaryan, Peter Richtarik
NeurIPS 2022 Variance Reduced ProxSkip: Algorithm, Theory and Application to Federated Learning Grigory Malinovsky, Kai Yi, Peter Richtarik
AISTATS 2021 A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free! Dmitry Kovalev, Anastasia Koloskova, Martin Jaggi, Peter Richtarik, Sebastian Stich
AISTATS 2021 Hyperparameter Transfer Learning with Adaptive Complexity Samuel Horváth, Aaron Klein, Peter Richtarik, Cedric Archambeau
AISTATS 2021 Local SGD: Unified Theory and New Efficient Methods Eduard Gorbunov, Filip Hanzely, Peter Richtarik
ICLR 2021 A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning Samuel Horváth, Peter Richtarik
ICML 2021 ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks Dmitry Kovalev, Egor Shulgin, Peter Richtarik, Alexander V Rogozin, Alexander Gasnikov
NeurIPS 2021 CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression Zhize Li, Peter Richtarik
ICML 2021 Distributed Second Order Methods with Fast Rates and Compressed Communication Rustem Islamov, Xun Qian, Peter Richtarik
NeurIPS 2021 EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback Peter Richtarik, Igor Sokolov, Ilyas Fatkhullin
NeurIPS 2021 Error Compensated Distributed SGD Can Be Accelerated Xun Qian, Peter Richtarik, Tong Zhang
NeurIPSW 2021 FedMix: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning Elnur Gasanov, Ahmed Khaled, Samuel Horváth, Peter Richtárik
JMLR 2021 L-SVRG and L-Katyusha with Arbitrary Sampling Xun Qian, Zheng Qu, Peter Richtárik
NeurIPS 2021 Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization over Time-Varying Networks Dmitry Kovalev, Elnur Gasanov, Alexander Gasnikov, Peter Richtarik
ICML 2021 MARINA: Faster Non-Convex Distributed Learning with Compression Eduard Gorbunov, Konstantin P. Burlachenko, Zhize Li, Peter Richtarik
ICML 2021 PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtarik
NeurIPS 2021 Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization Mher Safaryan, Filip Hanzely, Peter Richtarik
ICML 2021 Stochastic Sign Descent Methods: New Algorithms and Better Theory Mher Safaryan, Peter Richtarik
UAI 2020 99% of Worker-Master Communication in Distributed Optimization Is Not Needed Konstantin Mishchenko, Filip Hanzely, Peter Richtarik
ICLR 2020 A Stochastic Derivative Free Optimization Method with Momentum Eduard Gorbunov, Adel Bibi, Ozan Sener, El Houcine Bergou, Peter Richtarik
AAAI 2020 A Stochastic Derivative-Free Optimization Method with Importance Sampling: Theory and Learning to Control Adel Bibi, El Houcine Bergou, Ozan Sener, Bernard Ghanem, Peter Richtárik
AISTATS 2020 A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent Eduard Gorbunov, Filip Hanzely, Peter Richtarik
ICML 2020 Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization Zhize Li, Dmitry Kovalev, Xun Qian, Peter Richtarik
ALT 2020 Don’t Jump Through Hoops and Remove Those Loops: SVRG and Katyusha Are Better Without the Outer Loop Dmitry Kovalev, Samuel Horváth, Peter Richtárik
ICML 2020 From Local SGD to Local Fixed-Point Methods for Federated Learning Grigory Malinovskiy, Dmitry Kovalev, Elnur Gasanov, Laurent Condat, Peter Richtarik
NeurIPS 2020 Linearly Converging Error Compensated SGD Eduard Gorbunov, Dmitry Kovalev, Dmitry Makarenko, Peter Richtarik
NeurIPS 2020 Lower Bounds and Optimal Algorithms for Personalized Federated Learning Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtarik
NeurIPS 2020 Optimal and Practical Algorithms for Smooth and Strongly Convex Decentralized Optimization Dmitry Kovalev, Adil Salim, Peter Richtarik
NeurIPS 2020 Primal Dual Interpretation of the Proximal Stochastic Gradient Langevin Algorithm Adil Salim, Peter Richtarik
NeurIPS 2020 Random Reshuffling: Simple Analysis with Vast Improvements Konstantin Mishchenko, Ahmed Khaled, Peter Richtarik
AISTATS 2020 Revisiting Stochastic Extragradient Konstantin Mishchenko, Dmitry Kovalev, Egor Shulgin, Peter Richtarik, Yura Malitsky
ICML 2020 Stochastic Subspace Cubic Newton Method Filip Hanzely, Nikita Doikov, Yurii Nesterov, Peter Richtarik
AISTATS 2020 Tighter Theory for Local SGD on Identical and Heterogeneous Data Ahmed Khaled, Konstantin Mishchenko, Peter Richtarik
ICML 2020 Variance Reduced Coordinate Descent with Acceleration: New Method with a Surprising Application to Finite-Sum Problems Filip Hanzely, Dmitry Kovalev, Peter Richtarik
AAAI 2019 A Nonconvex Projection Method for Robust PCA Aritra Dutta, Filip Hanzely, Peter Richtárik
AISTATS 2019 Accelerated Coordinate Descent with Arbitrary Sampling and Best Rates for Minibatches Filip Hanzely, Peter Richtarik
JMLR 2019 New Convergence Aspects of Stochastic Gradient Algorithms Lam M. Nguyen, Phuong Ha Nguyen, Peter Richtárik, Katya Scheinberg, Martin Takáč, Marten van Dijk
WACV 2019 Online and Batch Supervised Background Estimation via L1 Regression Aritra Dutta, Peter Richtárik
NeurIPS 2019 RSN: Randomized Subspace Newton Robert Gower, Dmitry Kovalev, Felix Lieder, Peter Richtarik
ICML 2019 SAGA with Arbitrary Sampling Xun Qian, Zheng Qu, Peter Richtárik
NeurIPS 2019 Stochastic Proximal Langevin Algorithm: Potential Splitting and Nonasymptotic Rates Adil Salim, Dmitry Kovalev, Peter Richtarik
NeurIPS 2018 Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization Robert Gower, Filip Hanzely, Peter Richtarik, Sebastian U Stich
ALT 2018 Coordinate Descent Faceoff: Primal or Dual? Dominik Csiba, Peter Richtárik
JMLR 2018 Importance Sampling for Minibatches Dominik Csiba, Peter Richtárik
ICML 2018 Randomized Block Cubic Newton Method Nikita Doikov, Peter Richtarik
NeurIPS 2018 SEGA: Variance Reduction via Gradient Sketching Filip Hanzely, Konstantin Mishchenko, Peter Richtarik
ICML 2018 SGD and Hogwild! Convergence Without the Bounded Gradients Assumption Lam Nguyen, Phuong Ha Nguyen, Marten van Dijk, Peter Richtarik, Katya Scheinberg, Martin Takac
NeurIPS 2018 Stochastic Spectral and Conjugate Descent Methods Dmitry Kovalev, Peter Richtarik, Eduard Gorbunov, Elnur Gasanov
ICCVW 2017 A Batch-Incremental Video Background Estimation Model Using Weighted Low-Rank Approximation of Matrices Xin Li, Aritra Dutta, Peter Richtárik
JMLR 2016 Distributed Coordinate Descent Method for Learning with Big Data Peter Richtárik, Martin Takáč
ICML 2016 Even Faster Accelerated Coordinate Descent Using Non-Uniform Sampling Zeyuan Allen-Zhu, Zheng Qu, Peter Richtarik, Yang Yuan
ICML 2016 SDNA: Stochastic Dual Newton Ascent for Empirical Risk Minimization Zheng Qu, Peter Richtarik, Martin Takac, Olivier Fercoq
ICML 2016 Stochastic Block BFGS: Squeezing More Curvature Out of Data Robert Gower, Donald Goldfarb, Peter Richtarik
ICML 2015 Adding vs. Averaging in Distributed Primal-Dual Optimization Chenxin Ma, Virginia Smith, Martin Jaggi, Michael Jordan, Peter Richtarik, Martin Takac
NeurIPS 2015 Quartz: Randomized Dual Coordinate Ascent with Arbitrary Sampling Zheng Qu, Peter Richtarik, Tong Zhang
ICML 2015 Stochastic Dual Coordinate Ascent with Adaptive Probabilities Dominik Csiba, Zheng Qu, Peter Richtarik
ICML 2013 Mini-Batch Primal and Dual Methods for SVMs Martin Takac, Avleen Bijral, Peter Richtarik, Nati Srebro
JMLR 2010 Generalized Power Method for Sparse Principal Component Analysis Michel Journée, Yurii Nesterov, Peter Richtárik, Rodolphe Sepulchre