Takac, Martin

46 publications

ICML 2025. Clipping Improves Adam-Norm and AdaGrad-Norm When the Noise Is Heavy-Tailed. Savelii Chezhegov, Yaroslav Klyukin, Andrei Semenov, Aleksandr Beznosikov, Alexander Gasnikov, Samuel Horváth, Martin Takáč, Eduard Gorbunov.
CPAL 2025. Collaborative and Efficient Personalization with Mixtures of Adaptors. Abdulla Jasem Almansoori, Samuel Horváth, Martin Takáč.
ICML 2025. FRUGAL: Memory-Efficient Optimization by Reducing State Overhead for Scalable Training. Philip Zmushko, Aleksandr Beznosikov, Martin Takáč, Samuel Horváth.
CPAL 2025. FedPeWS: Personalized Warmup via Subnetworks for Enhanced Heterogeneous Federated Learning. Nurbek Tastan, Samuel Horváth, Martin Takáč, Karthik Nandakumar.
ICLR 2025. From Risk to Uncertainty: Generating Predictive Uncertainty Measures via Bayesian Estimation. Nikita Kotelevskii, Vladimir Kondratyev, Martin Takáč, Eric Moulines, Maxim Panov.
ICLR 2025. Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity. Eduard Gorbunov, Nazarii Tupitsa, Sayantan Choudhury, Alen Aliev, Peter Richtárik, Samuel Horváth, Martin Takáč.
ICLR 2025. Methods with Local Steps and Random Reshuffling for Generally Smooth Non-Convex Federated Optimization. Yury Demidovich, Petr Ostroukhov, Grigory Malinovsky, Samuel Horváth, Martin Takáč, Peter Richtárik, Eduard Gorbunov.
ICLR 2025. OPTAMI: Global Superlinear Convergence of High-Order Methods. Dmitry Kamzolov, Artem Agafonov, Dmitry Pasechnyuk, Alexander Gasnikov, Martin Takáč.
AISTATS 2025. Revisiting LocalSGD and SCAFFOLD: Improved Rates and Missing Analysis. Ruichen Luo, Sebastian U Stich, Samuel Horváth, Martin Takáč.
NeurIPS 2025. SVRPBench: A Realistic Benchmark for Stochastic Vehicle Routing Problem. Ahmed Heakl, Yahia Salaheldin Shaaban, Salem Lahlou, Martin Takáč, Zangir Iklassov.
NeurIPS 2025. Uncovering the Spectral Bias in Diagonal State Space Models. Ruben Solozabal, Velibor Bojkovic, Hilal AlQuabeh, Kentaro Inui, Martin Takáč.
ICLR 2024. Advancing the Lower Bounds: An Accelerated, Stochastic, Second-Order Method with Optimal Adaptation to Inexactness. Artem Agafonov, Dmitry Kamzolov, Alexander Gasnikov, Ali Kavis, Kimon Antonakopoulos, Volkan Cevher, Martin Takáč.
IJCAI 2024. Dirichlet-Based Uncertainty Quantification for Personalized Federated Learning with Improved Posterior Networks. Nikita Kotelevskii, Samuel Horváth, Karthik Nandakumar, Martin Takác, Maxim Panov.
AISTATS 2024. Efficient Conformal Prediction Under Data Heterogeneity. Vincent Plassier, Nikita Kotelevskii, Aleksandr Rubashevskii, Fedor Noskov, Maksim Velikanov, Alexander Fishkov, Samuel Horvath, Martin Takac, Eric Moulines, Maxim Panov.
NeurIPS 2024. Exploring Jacobian Inexactness in Second-Order Methods for Variational Inequalities: Lower Bounds, Optimal Algorithms and Quasi-Newton Approximations. Artem Agafonov, Petr Ostroukhov, Roman Mozhaev, Konstantin Yakovlev, Eduard Gorbunov, Martin Takáč, Alexander Gasnikov, Dmitry Kamzolov.
TMLR 2024. PaDPaF: Partial Disentanglement with Partially-Federated GANs. Abdulla Jasem Almansoori, Samuel Horváth, Martin Takáč.
NeurIPS 2024. Remove That Square Root: A New Efficient Scale-Invariant Version of AdaGrad. Sayantan Choudhury, Nazarii Tupitsa, Nicolas Loizou, Samuel Horváth, Martin Takáč, Eduard Gorbunov.
AAAI 2024. Robustly Train Normalizing Flows via KL Divergence Regularization. Kun Song, Ruben Solozabal, Hao Li, Martin Takác, Lu Ren, Fakhri Karray.
NeurIPS 2024. Self-Guiding Exploration for Combinatorial Problems. Zangir Iklassov, Yali Du, Farkhad Akimov, Martin Takáč.
TMLR 2023. AI-SARAH: Adaptive and Implicit Stochastic Recursive Gradient Methods. Zheng Shi, Abdurakhmon Sadiev, Nicolas Loizou, Peter Richtárik, Martin Takáč.
AISTATS 2023. Algorithm for Constrained Markov Decision Process with Linear Convergence. Egor Gladin, Maksim Lavrik-Karmazin, Karina Zainullina, Varvara Rudenko, Alexander Gasnikov, Martin Takac.
NeurIPS 2023. Byzantine-Tolerant Methods for Distributed Variational Inequalities. Nazarii Tupitsa, Abdulla Jasem Almansoori, Yanlin Wu, Martin Takac, Karthik Nandakumar, Samuel Horváth, Eduard Gorbunov.
IJCAI 2023. On the Study of Curriculum Learning for Inferring Dispatching Policies on the Job Shop Scheduling. Zangir Iklassov, Dmitrii Medvedev, Ruben Solozabal Ochoa de Retana, Martin Takác.
NeurIPSW 2023. Rapid Fitting of Band-Excitation Piezoresponse Force Microscopy Using Physics Constrained Unsupervised Neural Networks. Alibek T Kaliyev, Ryan F Forelli, Shuyu Qin, Yichen Guo, Seda Memik, Michael W. Mahoney, Amir Gholami, Nhan Tran, Philip Harris, Martin Takáč, Joshua Agar.
ACML 2023. Reinforcement Learning for Solving Stochastic Vehicle Routing Problem. Zangir Iklassov, Ikboljon Sobirov, Ruben Solozabal, Martin Takáč.
ICLR 2023. SP2: A Second Order Stochastic Polyak Method. Shuang Li, William Joseph Swartworth, Martin Takáč, Deanna Needell, Robert M. Gower.
NeurIPS 2023. Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities. Aleksandr Beznosikov, Martin Takac, Alexander Gasnikov.
NeurIPS 2022. A Damped Newton Method Achieves Global $\mathcal O \left(\frac{1}{k^2}\right)$ and Local Quadratic Convergence Rate. Slavomír Hanzely, Dmitry Kamzolov, Dmitry Pasechnyuk, Alexander Gasnikov, Peter Richtarik, Martin Takac.
ICLR 2022. Doubly Adaptive Scaled Algorithm for Machine Learning Using Second-Order Information. Majid Jahani, Sergey Rusakov, Zheng Shi, Peter Richtárik, Michael W. Mahoney, Martin Takac.
ICML 2022. The Power of First-Order Smooth Optimization for Black-Box Non-Smooth Problems. Alexander Gasnikov, Anton Novitskii, Vasilii Novitskii, Farshed Abdukhakimov, Dmitry Kamzolov, Aleksandr Beznosikov, Martin Takac, Pavel Dvurechensky, Bin Gu.
AISTATS 2021. SONIA: A Symmetric Blockwise Truncated Optimization Algorithm. Majid Jahani, MohammadReza Nazari, Rachael Tappenden, Albert Berahas, Martin Takac.
JMLR 2020. A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning. Aryan Mokhtari, Alec Koppel, Martin Takac, Alejandro Ribeiro.
AISTATS 2020. Efficient Distributed Hessian Free Algorithm for Large-Scale Empirical Risk Minimization via Accumulating Sample Strategy. Majid Jahani, Xi He, Chenxin Ma, Aryan Mokhtari, Dheevatsa Mudigere, Alejandro Ribeiro, Martin Takac.
IJCAI 2019. Entropy-Penalized Semidefinite Programming. Mikhail Krechetov, Jakub Marecek, Yury Maximov, Martin Takác.
JMLR 2019. New Convergence Aspects of Stochastic Gradient Algorithms. Lam M. Nguyen, Phuong Ha Nguyen, Peter Richtárik, Katya Scheinberg, Martin Takáč, Marten van Dijk.
NeurIPS 2018. Reinforcement Learning for Solving the Vehicle Routing Problem. MohammadReza Nazari, Afshin Oroojlooy, Lawrence Snyder, Martin Takac.
ICML 2018. SGD and Hogwild! Convergence Without the Bounded Gradients Assumption. Lam Nguyen, Phuong Ha Nguyen, Marten van Dijk, Peter Richtarik, Katya Scheinberg, Martin Takac.
ICML 2017. SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient. Lam M. Nguyen, Jie Liu, Katya Scheinberg, Martin Takáč.
NeurIPS 2016. A Multi-Batch L-BFGS Method for Machine Learning. Albert S Berahas, Jorge Nocedal, Martin Takac.
JMLR 2016. Distributed Coordinate Descent Method for Learning with Big Data. Peter Richtárik, Martin Takáč.
JMLR 2016. Linear Convergence of Randomized Feasible Descent Methods Under the Weak Strong Convexity Assumption. Chenxin Ma, Rachael Tappenden, Martin Takáč.
ICML 2016. Primal-Dual Rates and Certificates. Celestine Dünner, Simone Forte, Martin Takac, Martin Jaggi.
ICML 2016. SDNA: Stochastic Dual Newton Ascent for Empirical Risk Minimization. Zheng Qu, Peter Richtarik, Martin Takac, Olivier Fercoq.
ICML 2015. Adding vs. Averaging in Distributed Primal-Dual Optimization. Chenxin Ma, Virginia Smith, Martin Jaggi, Michael Jordan, Peter Richtarik, Martin Takac.
NeurIPS 2014. Communication-Efficient Distributed Dual Coordinate Ascent. Martin Jaggi, Virginia Smith, Martin Takac, Jonathan Terhorst, Sanjay Krishnan, Thomas Hofmann, Michael I Jordan.
ICML 2013. Mini-Batch Primal and Dual Methods for SVMs. Martin Takac, Avleen Bijral, Peter Richtarik, Nati Srebro.