Zdeborová, Lenka
56 publications
Counting in Small Transformers: The Delicate Interplay Between Attention and Feed-Forward Layers. ICML 2025.
Fundamental Computational Limits of Weak Learnability in High-Dimensional Multi-Index Models. AISTATS 2025.
Learning with Restricted Boltzmann Machines: Asymptotics of AMP and GD in High Dimensions. NeurIPS 2025.
A Phase Transition Between Positional and Semantic Learning in a Solvable Model of Dot-Product Attention. NeurIPS 2024.
Bayes-Optimal Learning of an Extensive-Width Neural Network from Quadratically Many Samples. NeurIPS 2024.
Phase Diagram of Stochastic Gradient Descent in High-Dimensional Two-Layer Neural Networks. NeurIPS 2022.
Subspace Clustering in High-Dimensions: Phase Transitions & Statistical-to-Computational Gap. NeurIPS 2022.
Generalization Error Rates in Kernel Regression: The Crossover from the Noiseless to Noisy Regime. NeurIPS 2021.
Learning Curves of Generic Features Maps for Realistic Datasets with a Teacher-Student Model. NeurIPS 2021.
Learning Gaussian Mixtures with Generalized Linear Models: Precise Asymptotics in High-Dimensions. NeurIPS 2021.
Complex Dynamics in Simple Neural Networks: Understanding Gradient Flow in Phase Retrieval. NeurIPS 2020.
Dynamical Mean-Field Theory for Stochastic Gradient Descent in Gaussian Mixture Classification. NeurIPS 2020.
Generalization Error in High-Dimensional Perceptrons: Approaching Bayes Error with Convex Optimization. NeurIPS 2020.
Optimization and Generalization of Shallow Neural Networks with Quadratic Activation Functions. NeurIPS 2020.
Dynamics of Stochastic Gradient Descent for Two-Layer Neural Networks in the Teacher-Student Setup. NeurIPS 2019.
Precise Asymptotics for Phase Retrieval and Compressed Sensing with Random Generative Priors. NeurIPSW 2019.
Who Is Afraid of Big Bad Minima? Analysis of Gradient-Flow in Spiked Matrix-Tensor Models. NeurIPS 2019.
The Committee Machine: Computational to Statistical Gaps in Learning a Two-Layers Neural Network. NeurIPS 2018.