Gurbuzbalaban, Mert

24 publications

JMLR 2024 High Probability and Risk-Averse Guarantees for a Stochastic Accelerated Primal-Dual Method Yassine Laguel, Necdet Serhat Aybat, Mert Gürbüzbalaban
NeurIPS 2024 High-Probability Complexity Bounds for Stochastic Non-Convex Minimax Optimization Yassine Laguel, Yasa Syed, Necdet Serhat Aybat, Mert Gürbüzbalaban
JMLR 2024 Penalized Overdamped and Underdamped Langevin Monte Carlo Algorithms for Constrained Sampling Mert Gurbuzbalaban, Yuanhan Hu, Lingjiong Zhu
ICML 2023 Algorithmic Stability of Heavy-Tailed SGD with General Loss Functions Anant Raj, Lingjiong Zhu, Mert Gurbuzbalaban, Umut Simsekli
ALT 2023 Algorithmic Stability of Heavy-Tailed Stochastic Gradient Descent on Least Squares Anant Raj, Melih Barsbey, Mert Gurbuzbalaban, Lingjiong Zhu, Umut Şimşekli
TMLR 2023 Cyclic and Randomized Stepsizes Invoke Heavier Tails in SGD than Constant Stepsize Mert Gurbuzbalaban, Yuanhan Hu, Umut Simsekli, Lingjiong Zhu
NeurIPS 2023 Uniform-in-Time Wasserstein Stability Bounds for (Noisy) Stochastic Gradient Descent Lingjiong Zhu, Mert Gurbuzbalaban, Anant Raj, Umut Simsekli
JMLR 2022 Robust Distributed Accelerated Stochastic Gradient Methods for Multi-Agent Networks Alireza Fallah, Mert Gürbüzbalaban, Asuman Ozdaglar, Umut Şimşekli, Lingjiong Zhu
NeurIPS 2022 SAPD+: An Accelerated Stochastic Method for Nonconvex-Concave Minimax Problems Xuan Zhang, Necdet Serhat Aybat, Mert Gurbuzbalaban
AISTATS 2021 Fractional Moment-Preserving Initialization Schemes for Training Deep Neural Networks Mert Gurbuzbalaban, Yuanhan Hu
ICML 2021 Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections Alexander Camuto, Xiaoyu Wang, Lingjiong Zhu, Chris Holmes, Mert Gurbuzbalaban, Umut Simsekli
NeurIPS 2021 Convergence Rates of Stochastic Gradient Descent Under Infinite Noise Variance Hongjian Wang, Mert Gurbuzbalaban, Lingjiong Zhu, Umut Simsekli, Murat A Erdogdu
JMLR 2021 Decentralized Stochastic Gradient Langevin Dynamics and Hamiltonian Monte Carlo Mert Gürbüzbalaban, Xuefeng Gao, Yuanhan Hu, Lingjiong Zhu
NeurIPS 2021 Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms Alexander Camuto, George Deligiannidis, Murat A Erdogdu, Mert Gurbuzbalaban, Umut Simsekli, Lingjiong Zhu
ICML 2021 The Heavy-Tail Phenomenon in SGD Mert Gurbuzbalaban, Umut Simsekli, Lingjiong Zhu
NeurIPS 2020 Breaking Reversibility Accelerates Langevin Dynamics for Non-Convex Optimization Xuefeng Gao, Mert Gurbuzbalaban, Lingjiong Zhu
AISTATS 2020 DAve-QN: A Distributed Averaged Quasi-Newton Method with Local Superlinear Convergence Rate Saeed Soori, Konstantin Mishchenko, Aryan Mokhtari, Maryam Mehri Dehnavi, Mert Gurbuzbalaban
ICML 2020 Fractional Underdamped Langevin Dynamics: Retargeting SGD with Momentum Under Heavy-Tailed Gradient Noise Umut Simsekli, Lingjiong Zhu, Yee Whye Teh, Mert Gurbuzbalaban
NeurIPS 2020 IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method Yossi Arjevani, Joan Bruna, Bugra Can, Mert Gurbuzbalaban, Stefanie Jegelka, Hongzhou Lin
ICML 2019 A Tail-Index Analysis of Stochastic Gradient Noise in Deep Neural Networks Umut Simsekli, Levent Sagun, Mert Gurbuzbalaban
NeurIPS 2019 A Universally Optimal Multistage Accelerated Stochastic Gradient Method Necdet Serhat Aybat, Alireza Fallah, Mert Gurbuzbalaban, Asuman Ozdaglar
ICML 2019 Accelerated Linear Convergence of Stochastic Momentum Methods in Wasserstein Distances Bugra Can, Mert Gurbuzbalaban, Lingjiong Zhu
NeurIPS 2019 First Exit Time Analysis of Stochastic Gradient Descent Under Heavy-Tailed Gradient Noise Thanh Huy Nguyen, Umut Simsekli, Mert Gurbuzbalaban, Gaël Richard
NeurIPS 2017 When Cyclic Coordinate Descent Outperforms Randomized Coordinate Descent Mert Gurbuzbalaban, Asuman Ozdaglar, Pablo A Parrilo, Nuri Vanli