Hanzely, Filip

12 publications

TMLR 2023. Personalized Federated Learning: A Unified Framework and Universal Optimization Techniques. Filip Hanzely, Boxin Zhao, Mladen Kolar.
AISTATS 2021. Local SGD: Unified Theory and New Efficient Methods. Eduard Gorbunov, Filip Hanzely, Peter Richtárik.
NeurIPS 2021. Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization. Mher Safaryan, Filip Hanzely, Peter Richtárik.
UAI 2020. 99% of Worker-Master Communication in Distributed Optimization Is Not Needed. Konstantin Mishchenko, Filip Hanzely, Peter Richtárik.
AISTATS 2020. A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent. Eduard Gorbunov, Filip Hanzely, Peter Richtárik.
NeurIPS 2020. Lower Bounds and Optimal Algorithms for Personalized Federated Learning. Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik.
ICML 2020. Stochastic Subspace Cubic Newton Method. Filip Hanzely, Nikita Doikov, Yurii Nesterov, Peter Richtárik.
ICML 2020. Variance Reduced Coordinate Descent with Acceleration: New Method with a Surprising Application to Finite-Sum Problems. Filip Hanzely, Dmitry Kovalev, Peter Richtárik.
AAAI 2019. A Nonconvex Projection Method for Robust PCA. Aritra Dutta, Filip Hanzely, Peter Richtárik.
AISTATS 2019. Accelerated Coordinate Descent with Arbitrary Sampling and Best Rates for Minibatches. Filip Hanzely, Peter Richtárik.
NeurIPS 2018. Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization. Robert Gower, Filip Hanzely, Peter Richtárik, Sebastian U. Stich.
NeurIPS 2018. SEGA: Variance Reduction via Gradient Sketching. Filip Hanzely, Konstantin Mishchenko, Peter Richtárik.