Krzakala, Florent
66 publications
AISTATS 2025 · A Random Matrix Theory Perspective on the Spectrum of Learned Features and Asymptotic Generalization Capabilities
AISTATS 2025 · Fundamental Computational Limits of Weak Learnability in High-Dimensional Multi-Index Models
NeurIPS 2025 · Learning with Restricted Boltzmann Machines: Asymptotics of AMP and GD in High Dimensions
NeurIPS 2024 · A Phase Transition Between Positional and Semantic Learning in a Solvable Model of Dot-Product Attention
AISTATS 2024 · Asymptotic Characterisation of the Performance of Robust Linear Regression in the Presence of Outliers
NeurIPS 2024 · Bayes-Optimal Learning of an Extensive-Width Neural Network from Quadratically Many Samples
ICML 2024 · Online Learning and Information Exponents: The Importance of Batch Size & Time/Complexity Tradeoffs
ICML Workshop 2024 · Repetita Iuvant: Data Repetition Allows SGD to Learn High-Dimensional Multi-Index Functions
NeurIPS 2022 · Phase Diagram of Stochastic Gradient Descent in High-Dimensional Two-Layer Neural Networks
NeurIPS 2022 · Subspace Clustering in High-Dimensions: Phase Transitions & Statistical-to-Computational Gap
NeurIPS 2021 · Generalization Error Rates in Kernel Regression: The Crossover from the Noiseless to Noisy Regime
NeurIPS 2021 · Learning Curves of Generic Features Maps for Realistic Datasets with a Teacher-Student Model
NeurIPS 2021 · Learning Gaussian Mixtures with Generalized Linear Models: Precise Asymptotics in High-Dimensions
COLT 2020 · Asymptotic Errors for High-Dimensional Convex Penalized Linear Regression Beyond Gaussian Matrices
NeurIPS 2020 · Complex Dynamics in Simple Neural Networks: Understanding Gradient Flow in Phase Retrieval
NeurIPS 2020 · Dynamical Mean-Field Theory for Stochastic Gradient Descent in Gaussian Mixture Classification
NeurIPS 2020 · Generalization Error in High-Dimensional Perceptrons: Approaching Bayes Error with Convex Optimization
NeurIPS 2019 · Dynamics of Stochastic Gradient Descent for Two-Layer Neural Networks in the Teacher-Student Setup
NeurIPS Workshop 2019 · Precise Asymptotics for Phase Retrieval and Compressed Sensing with Random Generative Priors
NeurIPS 2019 · Who Is Afraid of Big Bad Minima? Analysis of Gradient-Flow in Spiked Matrix-Tensor Models
NeurIPS 2018 · The Committee Machine: Computational to Statistical Gaps in Learning a Two-Layers Neural Network