Simsekli, Umut

53 publications

ALT 2025 A PAC-Bayesian Link Between Generalisation and Flat Minima Maxime Haddouche, Paul Viallard, Umut Simsekli, Benjamin Guedj
NeurIPS 2025 Algorithm- and Data-Dependent Generalization Bounds for Diffusion Models Benjamin Dupuis, Dario Shariatian, Maxime Haddouche, Alain Oliviero Durmus, Umut Simsekli
ICLR 2025 Heavy-Tailed Diffusion with Denoising Lévy Probabilistic Models Dario Shariatian, Umut Simsekli, Alain Oliviero Durmus
ICML 2025 The Surprising Agreement Between Convex Optimization Theory and Learning-Rate Scheduling for Large Model Training Fabian Schaipp, Alexander Hägele, Adrien Taylor, Umut Simsekli, Francis Bach
TMLR 2025 Tracking the Median of Gradients with a Stochastic Proximal Point Method Fabian Schaipp, Guillaume Garrigos, Umut Simsekli, Robert M. Gower
ICML 2024 Generalization Bounds for Heavy-Tailed SDEs Through the Fractional Fokker-Planck Equation Benjamin Dupuis, Umut Simsekli
ICML 2024 Implicit Compressibility of Overparametrized Neural Networks Trained with Heavy-Tailed SGD Yijun Wan, Melih Barsbey, Abdellatif Zaidi, Umut Simsekli
NeurIPS 2024 Piecewise Deterministic Generative Models Andrea Bertazzi, Dario Shariatian, Umut Simsekli, Eric Moulines, Alain Durmus
NeurIPS 2024 Topological Generalization Bounds for Discrete-Time Stochastic Optimization Algorithms Rayna Andreeva, Benjamin Dupuis, Rik Sarkar, Tolga Birdal, Umut Simsekli
JMLR 2024 Uniform Generalization Bounds on Data-Dependent Hypothesis Sets via PAC-Bayesian Theory on Random Sets Benjamin Dupuis, Paul Viallard, George Deligiannidis, Umut Simsekli
ICML 2023 Algorithmic Stability of Heavy-Tailed SGD with General Loss Functions Anant Raj, Lingjiong Zhu, Mert Gurbuzbalaban, Umut Simsekli
NeurIPS 2023 Approximate Heavy Tails in Offline (Multi-Pass) Stochastic Gradient Descent Krunoslav Lehman Pavasovic, Alain Durmus, Umut Simsekli
TMLR 2023 Cyclic and Randomized Stepsizes Invoke Heavier Tails in SGD than Constant Stepsize Mert Gurbuzbalaban, Yuanhan Hu, Umut Simsekli, Lingjiong Zhu
NeurIPS 2023 Efficient Sampling of Stochastic Differential Equations with Positive Semi-Definite Models Anant Raj, Umut Simsekli, Alessandro Rudi
ICML 2023 Generalization Bounds Using Data-Dependent Fractal Dimensions Benjamin Dupuis, George Deligiannidis, Umut Simsekli
COLT 2023 Generalization Guarantees via Algorithm-Dependent Rademacher Complexity Sarah Sachs, Tim van Erven, Liam Hodgkinson, Rajiv Khanna, Umut Simsekli
NeurIPS 2023 Learning via Wasserstein-Based High Probability Generalisation Bounds Paul Viallard, Maxime Haddouche, Umut Simsekli, Benjamin Guedj
NeurIPSW 2023 Learning via Wasserstein-Based High Probability Generalisation Bounds Paul Viallard, Maxime Haddouche, Umut Simsekli, Benjamin Guedj
NeurIPSW 2023 Neural Network Compression with Heavy-Tailed SGD Yijun Wan, Abdellatif Zaidi, Umut Simsekli
NeurIPSW 2023 Robust Gradient Estimation in the Presence of Heavy-Tailed Noise Fabian Schaipp, Umut Simsekli, Robert M. Gower
NeurIPS 2023 Uniform-in-Time Wasserstein Stability Bounds for (Noisy) Stochastic Gradient Descent Lingjiong Zhu, Mert Gurbuzbalaban, Anant Raj, Umut Simsekli
NeurIPS 2022 Chaotic Regularization and Heavy-Tailed Limits for Deterministic Gradient Descent Soon Hoe Lim, Yijun Wan, Umut Simsekli
ICML 2022 Generalization Bounds Using Lower Tail Exponents in Stochastic Optimizers Liam Hodgkinson, Umut Simsekli, Rajiv Khanna, Michael Mahoney
NeurIPS 2022 Generalization Bounds for Stochastic Gradient Descent via Localized ε-Covers Sejun Park, Umut Simsekli, Murat A. Erdogdu
COLT 2022 Rate-Distortion Theoretic Generalization Bounds for Stochastic Learning Algorithms Milad Sefidgaran, Amin Gohari, Gaël Richard, Umut Simsekli
JMLR 2022 Robust Distributed Accelerated Stochastic Gradient Methods for Multi-Agent Networks Alireza Fallah, Mert Gurbuzbalaban, Asuman Ozdaglar, Umut Simsekli, Lingjiong Zhu
ICML 2021 Asymmetric Heavy Tails and Implicit Bias in Gaussian Noise Injections Alexander Camuto, Xiaoyu Wang, Lingjiong Zhu, Chris Holmes, Mert Gurbuzbalaban, Umut Simsekli
NeurIPS 2021 Convergence Rates of Stochastic Gradient Descent Under Infinite Noise Variance Hongjian Wang, Mert Gurbuzbalaban, Lingjiong Zhu, Umut Simsekli, Murat A. Erdogdu
NeurIPS 2021 Fast Approximation of the Sliced-Wasserstein Distance Using Concentration of Random Projections Kimia Nadjahi, Alain Durmus, Pierre E. Jacob, Roland Badeau, Umut Simsekli
NeurIPS 2021 Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms Alexander Camuto, George Deligiannidis, Murat A. Erdogdu, Mert Gurbuzbalaban, Umut Simsekli, Lingjiong Zhu
NeurIPS 2021 Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks Melih Barsbey, Milad Sefidgaran, Murat A. Erdogdu, Gaël Richard, Umut Simsekli
NeurIPS 2021 Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks Tolga Birdal, Aaron Lou, Leonidas Guibas, Umut Simsekli
ICML 2021 Relative Positional Encoding for Transformers with Linear Complexity Antoine Liutkus, Ondřej Cífka, Shih-Lun Wu, Umut Simsekli, Yi-Hsuan Yang, Gaël Richard
ICML 2021 The Heavy-Tail Phenomenon in SGD Mert Gurbuzbalaban, Umut Simsekli, Lingjiong Zhu
NeurIPS 2020 Explicit Regularisation in Gaussian Noise Injections Alexander Camuto, Matthew Willetts, Umut Simsekli, Stephen J. Roberts, Chris C. Holmes
ICML 2020 Fractional Underdamped Langevin Dynamics: Retargeting SGD with Momentum Under Heavy-Tailed Gradient Noise Umut Simsekli, Lingjiong Zhu, Yee Whye Teh, Mert Gurbuzbalaban
NeurIPS 2020 Hausdorff Dimension, Heavy Tails, and Generalization in Neural Networks Umut Simsekli, Ozan Sener, George Deligiannidis, Murat A. Erdogdu
NeurIPS 2020 Quantitative Propagation of Chaos for SGD in Wide Neural Networks Valentin De Bortoli, Alain Durmus, Xavier Fontaine, Umut Simsekli
NeurIPS 2020 Statistical and Topological Properties of Sliced Probability Divergences Kimia Nadjahi, Alain Durmus, Lénaïc Chizat, Soheil Kolouri, Shahin Shahrampour, Umut Simsekli
ICML 2019 A Tail-Index Analysis of Stochastic Gradient Noise in Deep Neural Networks Umut Simsekli, Levent Sagun, Mert Gurbuzbalaban
NeurIPS 2019 Asymptotic Guarantees for Learning Generative Models with the Sliced-Wasserstein Distance Kimia Nadjahi, Alain Durmus, Umut Simsekli, Roland Badeau
NeurIPS 2019 First Exit Time Analysis of Stochastic Gradient Descent Under Heavy-Tailed Gradient Noise Thanh Huy Nguyen, Umut Simsekli, Mert Gurbuzbalaban, Gaël Richard
NeurIPS 2019 Generalized Sliced Wasserstein Distances Soheil Kolouri, Kimia Nadjahi, Umut Simsekli, Roland Badeau, Gustavo Rohde
ICML 2019 Non-Asymptotic Analysis of Fractional Langevin Monte Carlo for Non-Convex Optimization Thanh Huy Nguyen, Umut Simsekli, Gaël Richard
ICML 2019 Sliced-Wasserstein Flows: Nonparametric Generative Modeling via Optimal Transport and Diffusions Antoine Liutkus, Umut Simsekli, Szymon Majewski, Alain Durmus, Fabian-Robert Stöter
ICML 2018 Asynchronous Stochastic Quasi-Newton MCMC for Non-Convex Optimization Umut Simsekli, Cagatay Yildiz, Thanh Huy Nguyen, Taylan Cemgil, Gaël Richard
NeurIPS 2018 Bayesian Pose Graph Optimization via Bingham Distributions and Tempered Geodesic MCMC Tolga Birdal, Umut Simsekli, Mustafa Onur Eken, Slobodan Ilic
ICML 2017 Fractional Langevin Monte Carlo: Exploring Lévy Driven Stochastic Differential Equations for Markov Chain Monte Carlo Umut Simsekli
NeurIPS 2017 Learning the Morphology of Brain Signals Using Alpha-Stable Convolutional Sparse Coding Mainak Jas, Tom Dupré la Tour, Umut Simsekli, Alexandre Gramfort
NeurIPS 2016 Stochastic Gradient Richardson-Romberg Markov Chain Monte Carlo Alain Durmus, Umut Simsekli, Eric Moulines, Roland Badeau, Gaël Richard
ICML 2016 Stochastic Quasi-Newton Langevin Monte Carlo Umut Simsekli, Roland Badeau, Taylan Cemgil, Gaël Richard
ICML 2013 Learning the Beta-Divergence in Tweedie Compound Poisson Matrix Factorization Models Umut Simsekli, Ali Taylan Cemgil, Yusuf Kenan Yilmaz
NeurIPS 2011 Generalised Coupled Tensor Factorisation Kenan Y. Yilmaz, Ali T. Cemgil, Umut Simsekli