Erdogdu, Murat A.

52 publications

NeurIPS 2025 A Geometric Analysis of PCA Ayoub El Hanchi, Murat A Erdogdu, Chris J. Maddison
ICML 2025 Categorical Distributional Reinforcement Learning with Kullback-Leibler Divergence: Convergence and Asymptotics Tyler Kastner, Mark Rowland, Yunhao Tang, Murat A Erdogdu, Amir-Massoud Farahmand
NeurIPS 2025 Distributional Training Data Attribution: What Do Influence Functions Sample? Bruno Kacper Mlodozeniec, Isaac Reid, Samuel Power, David Krueger, Murat A Erdogdu, Richard E. Turner, Roger Baker Grosse
NeurIPS 2025 From Information to Generative Exponent: Learning Rate Induces Phase Transitions in SGD Konstantinos Christopher Tsiolis, Alireza Mousavi-Hosseini, Murat A Erdogdu
ICLR 2025 Learning Multi-Index Models with Neural Networks via Mean-Field Langevin Dynamics Alireza Mousavi-Hosseini, Denny Wu, Murat A Erdogdu
NeurIPS 2025 Learning Quadratic Neural Networks in High Dimensions: SGD Dynamics and Scaling Laws Gerard Ben Arous, Murat A Erdogdu, Nuri Mert Vural, Denny Wu
ICLR 2025 Robust Feature Learning for Multi-Index Models in High Dimensions Alireza Mousavi-Hosseini, Adel Javanmard, Murat A Erdogdu
NeurIPS 2025 When Do Transformers Outperform Feedforward and Recurrent Networks? A Statistical Perspective Alireza Mousavi-Hosseini, Clayton Sanford, Denny Wu, Murat A Erdogdu
NeurIPS 2024 A Separation in Heavy-Tailed Sampling: Gaussian vs. Stable Oracles for Proximal Samplers Ye He, Alireza Mousavi-Hosseini, Krishnakumar Balasubramanian, Murat A. Erdogdu
TMLR 2024 Beyond Labeling Oracles: What Does It Mean to Steal ML Models? Avital Shafran, Ilia Shumailov, Murat A Erdogdu, Nicolas Papernot
ICMLW 2024 Learning Multi-Index Models with Neural Networks via Mean-Field Langevin Dynamics Alireza Mousavi-Hosseini, Denny Wu, Murat A Erdogdu
JMLR 2024 Mean-Square Analysis of Discretized Itô Diffusions for Heavy-Tailed Sampling Ye He, Tyler Farghly, Krishnakumar Balasubramanian, Murat A. Erdogdu
NeurIPS 2024 On the Efficiency of ERM in Feature Learning Ayoub El Hanchi, Chris J. Maddison, Murat A. Erdogdu
COLT 2024 Pruning Is Optimal for Learning Sparse Features in High-Dimensions Nuri Mert Vural, Murat A Erdogdu
NeurIPSW 2024 Robust Feature Learning for Multi-Index Models in High Dimensions Alireza Mousavi-Hosseini, Adel Javanmard, Murat A Erdogdu
COLT 2024 Sampling from the Mean-Field Stationary Distribution Yunbum Kook, Matthew S. Zhang, Sinho Chewi, Murat A. Erdogdu, Mufan Li
NeurIPS 2023 Distributional Model Equivalence for Risk-Sensitive Reinforcement Learning Tyler Kastner, Murat A Erdogdu, Amir-massoud Farahmand
NeurIPS 2023 Gradient-Based Feature Learning Under Structured Data Alireza Mousavi-Hosseini, Denny Wu, Taiji Suzuki, Murat A Erdogdu
COLT 2023 Improved Discretization Analysis for Underdamped Langevin Monte Carlo Shunshi Zhang, Sinho Chewi, Mufan Li, Krishna Balasubramanian, Murat A. Erdogdu
NeurIPS 2023 Learning in the Presence of Low-Dimensional Structure: A Spiked Random Matrix Perspective Jimmy Ba, Murat A Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu
ICLR 2023 Neural Networks Efficiently Learn Low-Dimensional Representations with SGD Alireza Mousavi-Hosseini, Sejun Park, Manuela Girotti, Ioannis Mitliagkas, Murat A Erdogdu
NeurIPS 2023 Optimal Excess Risk Bounds for Empirical Risk Minimization on $p$-Norm Linear Regression Ayoub El Hanchi, Murat A Erdogdu
COLT 2023 Towards a Complete Analysis of Langevin Monte Carlo: Beyond Poincaré Inequality Alireza Mousavi-Hosseini, Tyler K. Farghly, Ye He, Krishna Balasubramanian, Murat A. Erdogdu
COLT 2022 Analysis of Langevin Monte Carlo from Poincaré to Log-Sobolev Sinho Chewi, Murat A Erdogdu, Mufan Li, Ruoqi Shen, Shunshi Zhang
AAAI 2022 Convergence and Optimality of Policy Gradient Methods in Weakly Smooth Settings Matthew Shunshi Zhang, Murat A. Erdogdu, Animesh Garg
AISTATS 2022 Convergence of Langevin Monte Carlo in Chi-Squared and Rényi Divergence Murat A. Erdogdu, Rasa Hosseinzadeh, Shunshi Zhang
NeurIPS 2022 Generalization Bounds for Stochastic Gradient Descent via Localized $\varepsilon$-Covers Sejun Park, Umut Simsekli, Murat A Erdogdu
NeurIPS 2022 High-Dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation Jimmy Ba, Murat A Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, Greg Yang
COLT 2022 Mirror Descent Strikes Again: Optimal Stochastic Convex Optimization Under Infinite Noise Variance Nuri Mert Vural, Lu Yu, Krishna Balasubramanian, Stanislav Volgushev, Murat A Erdogdu
NeurIPSW 2022 Neural Networks Efficiently Learn Low-Dimensional Representations with SGD Alireza Mousavi-Hosseini, Sejun Park, Manuela Girotti, Ioannis Mitliagkas, Murat A Erdogdu
COLT 2022 Towards a Theory of Non-Log-Concave Sampling: First-Order Stationarity Guarantees for Langevin Monte Carlo Krishna Balasubramanian, Sinho Chewi, Murat A Erdogdu, Adil Salim, Shunshi Zhang
ICLR 2022 Understanding the Variance Collapse of SVGD in High Dimensions Jimmy Ba, Murat A Erdogdu, Marzyeh Ghassemi, Shengyang Sun, Taiji Suzuki, Denny Wu, Tianzong Zhang
NeurIPS 2021 An Analysis of Constant Step Size SGD in the Non-Convex Regime: Asymptotic Normality and Bias Lu Yu, Krishnakumar Balasubramanian, Stanislav Volgushev, Murat A Erdogdu
NeurIPS 2021 Convergence Rates of Stochastic Gradient Descent Under Infinite Noise Variance Hongjian Wang, Mert Gurbuzbalaban, Lingjiong Zhu, Umut Simsekli, Murat A Erdogdu
NeurIPS 2021 Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms Alexander Camuto, George Deligiannidis, Murat A Erdogdu, Mert Gurbuzbalaban, Umut Simsekli, Lingjiong Zhu
NeurIPS 2021 Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks Melih Barsbey, Milad Sefidgaran, Murat A Erdogdu, Gaël Richard, Umut Simsekli
NeurIPS 2021 Manipulating SGD with Data Ordering Attacks Ilia Shumailov, Zakhar Shumaylov, Dmitry Kazhdan, Yiren Zhao, Nicolas Papernot, Murat A Erdogdu, Ross J Anderson
NeurIPS 2021 On Empirical Risk Minimization with Dependent and Heavy-Tailed Data Abhishek Roy, Krishnakumar Balasubramanian, Murat A Erdogdu
COLT 2021 On the Convergence of Langevin Monte Carlo: The Interplay Between Tail Growth and Smoothness Murat A Erdogdu, Rasa Hosseinzadeh
NeurIPS 2020 Hausdorff Dimension, Heavy Tails, and Generalization in Neural Networks Umut Simsekli, Ozan Sener, George Deligiannidis, Murat A Erdogdu
NeurIPS 2020 On the Ergodicity, Bias and Asymptotic Normality of Randomized Midpoint Sampling Method Ye He, Krishnakumar Balasubramanian, Murat A Erdogdu
COLT 2019 Normal Approximation for Stochastic Gradient Descent via Non-Asymptotic Rates of Martingale CLT Andreas Anastasiou, Krishnakumar Balasubramanian, Murat A. Erdogdu
NeurIPS 2019 Stochastic Runge-Kutta Accelerates Langevin Monte Carlo and Beyond Xuechen Li, Yi Wu, Lester Mackey, Murat A Erdogdu
NeurIPS 2018 Global Non-Convex Optimization with Discretized Diffusions Murat A Erdogdu, Lester Mackey, Ohad Shamir
NeurIPS 2017 Inference in Graphical Models via Semidefinite Programming Hierarchies Murat A Erdogdu, Yash Deshpande, Andrea Montanari
NeurIPS 2017 Robust Estimation of Neural Signals in Calcium Imaging Hakan Inan, Murat A Erdogdu, Mark Schnitzer
AISTATS 2016 Maximum Likelihood for Variance Estimation in High-Dimensional Linear Models Lee H. Dicker, Murat A. Erdogdu
JMLR 2016 Newton-Stein Method: An Optimization Method for GLMs via Stein's Lemma Murat A. Erdogdu
NeurIPS 2016 Scaled Least Squares Estimator for GLMs in Large-Scale Problems Murat A Erdogdu, Lee H Dicker, Mohsen Bayati
NeurIPS 2015 Convergence Rates of Sub-Sampled Newton Methods Murat A Erdogdu, Andrea Montanari
NeurIPS 2015 Newton-Stein Method: A Second Order Method for GLMs via Stein's Lemma Murat A Erdogdu
NeurIPS 2013 Estimating LASSO Risk and Noise Level Mohsen Bayati, Murat A Erdogdu, Andrea Montanari