Horváth, Samuel

42 publications

ICML 2025 Aequa: Fair Model Rewards in Collaborative Learning via Slimmable Networks Nurbek Tastan, Samuel Horváth, Karthik Nandakumar
TMLR 2025 CYCle: Choosing Your Collaborators Wisely to Enhance Collaborative Fairness in Decentralized Learning Nurbek Tastan, Samuel Horváth, Karthik Nandakumar
ICML 2025 Clipping Improves Adam-Norm and AdaGrad-Norm When the Noise Is Heavy-Tailed Savelii Chezhegov, Yaroslav Klyukin, Andrei Semenov, Aleksandr Beznosikov, Alexander Gasnikov, Samuel Horváth, Martin Takáč, Eduard Gorbunov
CPAL 2025 Collaborative and Efficient Personalization with Mixtures of Adaptors Abdulla Jasem Almansoori, Samuel Horváth, Martin Takáč
AISTATS 2025 DPFL: Decentralized Personalized Federated Learning Salma Kharrat, Marco Canini, Samuel Horváth
ICML 2025 FRUGAL: Memory-Efficient Optimization by Reducing State Overhead for Scalable Training Philip Zmushko, Aleksandr Beznosikov, Martin Takáč, Samuel Horváth
CPAL 2025 FedPeWS: Personalized Warmup via Subnetworks for Enhanced Heterogeneous Federated Learning Nurbek Tastan, Samuel Horváth, Martin Takáč, Karthik Nandakumar
ICLRW 2025 Initialization Using Update Approximation Is a Silver Bullet for Extremely Efficient Low-Rank Fine-Tuning Kaustubh Ponkshe, Raghav Singhal, Eduard Gorbunov, Alexey Tumanov, Samuel Horváth, Praneeth Vepakomma
ICLR 2025 Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity Eduard Gorbunov, Nazarii Tupitsa, Sayantan Choudhury, Alen Aliev, Peter Richtárik, Samuel Horváth, Martin Takáč
ICLR 2025 Methods with Local Steps and Random Reshuffling for Generally Smooth Non-Convex Federated Optimization Yury Demidovich, Petr Ostroukhov, Grigory Malinovsky, Samuel Horváth, Martin Takáč, Peter Richtárik, Eduard Gorbunov
TMLR 2025 Partially Personalized Federated Learning: Breaking the Curse of Data Heterogeneity Konstantin Mishchenko, Rustem Islamov, Eduard Gorbunov, Samuel Horváth
AISTATS 2025 Revisiting LocalSGD and SCAFFOLD: Improved Rates and Missing Analysis Ruichen Luo, Sebastian U Stich, Samuel Horváth, Martin Takáč
CPAL 2025 Vanishing Feature: Diagnosing Model Merging and Beyond Xingyu Qu, Samuel Horváth
ICLRW 2025 Vanishing Feature: Diagnosing Model Merging and Beyond Xingyu Qu, Samuel Horváth
ICLRW 2024 Balancing Privacy and Performance for Private Federated Learning Algorithms Xiangjian Hou, Sarit Khirirat, Mohammad Yaqub, Samuel Horváth
ICLRW 2024 Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just CLIP Gradient Differences Grigory Malinovsky, Eduard Gorbunov, Samuel Horváth, Peter Richtárik
NeurIPS 2024 Byzantine Robustness and Partial Participation Can Be Achieved at Once: Just CLIP Gradient Differences Grigory Malinovsky, Peter Richtárik, Samuel Horváth, Eduard Gorbunov
IJCAI 2024 Dirichlet-Based Uncertainty Quantification for Personalized Federated Learning with Improved Posterior Networks Nikita Kotelevskii, Samuel Horváth, Karthik Nandakumar, Martin Takáč, Maxim Panov
AISTATS 2024 Efficient Conformal Prediction Under Data Heterogeneity Vincent Plassier, Nikita Kotelevskii, Aleksandr Rubashevskii, Fedor Noskov, Maksim Velikanov, Alexander Fishkov, Samuel Horváth, Martin Takáč, Eric Moulines, Maxim Panov
ICML 2024 High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise Eduard Gorbunov, Abdurakhmon Sadiev, Marina Danilova, Samuel Horváth, Gauthier Gidel, Pavel Dvurechensky, Alexander Gasnikov, Peter Richtárik
ICML 2024 Maestro: Uncovering Low-Rank Structures via Trainable Decomposition Samuel Horváth, Stefanos Laskaridis, Shashank Rajput, Hongyi Wang
TMLR 2024 PaDPaF: Partial Disentanglement with Partially-Federated GANs Abdulla Jasem Almansoori, Samuel Horváth, Martin Takáč
IJCAI 2024 Redefining Contributions: Shapley-Driven Federated Learning Nurbek Tastan, Samar Fares, Toluwani Aremu, Samuel Horváth, Karthik Nandakumar
NeurIPS 2024 Remove That Square Root: A New Efficient Scale-Invariant Version of AdaGrad Sayantan Choudhury, Nazarii Tupitsa, Nicolas Loizou, Samuel Horváth, Martin Takáč, Eduard Gorbunov
NeurIPS 2023 Accelerated Zeroth-Order Method for Non-Smooth Stochastic Convex Optimization Problem with Infinite Variance Nikita Kornilov, Ohad Shamir, Aleksandr Lobanov, Darina Dvinskikh, Alexander Gasnikov, Innokentiy Shibaev, Eduard Gorbunov, Samuel Horváth
NeurIPS 2023 Byzantine-Tolerant Methods for Distributed Variational Inequalities Nazarii Tupitsa, Abdulla Jasem Almansoori, Yanlin Wu, Martin Takáč, Karthik Nandakumar, Samuel Horváth, Eduard Gorbunov
ICML 2023 Convergence of Proximal Point and Extragradient-Based Methods Beyond Monotonicity: The Case of Negative Comonotonicity Eduard Gorbunov, Adrien Taylor, Samuel Horváth, Gauthier Gidel
ICMLW 2023 Federated Learning with Regularized Client Participation Grigory Malinovsky, Samuel Horváth, Konstantin Pavlovich Burlachenko, Peter Richtárik
NeurIPS 2023 Handling Data Heterogeneity via Architectural Design for Federated Visual Recognition Sara Pieri, Jose Restom, Samuel Horváth, Hisham Cholakkal
ICML 2023 High-Probability Bounds for Stochastic Optimization and Variational Inequalities: The Case of Unbounded Variance Abdurakhmon Sadiev, Marina Danilova, Eduard Gorbunov, Samuel Horváth, Gauthier Gidel, Pavel Dvurechensky, Alexander Gasnikov, Peter Richtárik
NeurIPSW 2023 Maestro: Uncovering Low-Rank Structures via Trainable Decomposition Samuel Horváth, Stefanos Laskaridis, Shashank Rajput, Hongyi Wang
JMLR 2023 On Biased Compression for Distributed Learning Aleksandr Beznosikov, Samuel Horváth, Peter Richtárik, Mher Safaryan
ICLR 2023 Variance Reduction Is an Antidote to Byzantines: Better Rates, Weaker Assumptions and Communication Compression as a Cherry on the Top Eduard Gorbunov, Samuel Horváth, Peter Richtárik, Gauthier Gidel
AISTATS 2022 FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning Elnur Gasanov, Ahmed Khaled, Samuel Horváth, Peter Richtárik
TMLR 2022 FedShuffle: Recipes for Better Use of Local Work in Federated Learning Samuel Horváth, Maziar Sanjabi, Lin Xiao, Peter Richtárik, Michael Rabbat
TMLR 2022 Optimal Client Sampling for Federated Learning Wenlin Chen, Samuel Horváth, Peter Richtárik
AISTATS 2021 Hyperparameter Transfer Learning with Adaptive Complexity Samuel Horváth, Aaron Klein, Peter Richtárik, Cedric Archambeau
ICLR 2021 A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning Samuel Horváth, Peter Richtárik
NeurIPSW 2021 FedMix: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning Elnur Gasanov, Ahmed Khaled, Samuel Horváth, Peter Richtárik
NeurIPS 2021 FjORD: Fair and Accurate Federated Learning Under Heterogeneous Targets with Ordered Dropout Samuel Horváth, Stefanos Laskaridis, Mario Almeida, Ilias Leontiadis, Stylianos Venieris, Nicholas Lane
ALT 2020 Don’t Jump Through Hoops and Remove Those Loops: SVRG and Katyusha Are Better Without the Outer Loop Dmitry Kovalev, Samuel Horváth, Peter Richtárik
NeurIPS 2020 Lower Bounds and Optimal Algorithms for Personalized Federated Learning Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik